Feb 23 18:33:23 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 23 18:33:23 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 23 18:33:23 crc restorecon[4683]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 
18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 
crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 23 18:33:23 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:23 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 
crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc 
restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 23 18:33:24 crc restorecon[4683]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 23 18:33:24 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 23 18:33:25 crc kubenswrapper[4768]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.040295 4768 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047815 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047904 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047922 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047934 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047944 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047952 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047961 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047970 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047979 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047987 4768 
feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.047996 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048004 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048013 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048021 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048028 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048036 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048043 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048051 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048059 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048067 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048074 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048082 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048089 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048098 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 
18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048105 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048113 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048120 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048128 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048138 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048145 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048153 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048160 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048168 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048176 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048184 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048194 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048203 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048211 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048219 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048230 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048239 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048275 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048286 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048298 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048308 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048320 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048328 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048336 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048343 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048351 4768 feature_gate.go:330] unrecognized feature gate: 
MachineConfigNodes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048359 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048366 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048374 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048382 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048390 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048398 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048406 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048416 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048426 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048435 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048443 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048451 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048459 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048468 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048479 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048489 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048497 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048509 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048517 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048526 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.048535 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048691 4768 flags.go:64] FLAG: --address="0.0.0.0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048712 
4768 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048729 4768 flags.go:64] FLAG: --anonymous-auth="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048742 4768 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048756 4768 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048765 4768 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048777 4768 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048789 4768 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048799 4768 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048809 4768 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048818 4768 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048828 4768 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048838 4768 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048847 4768 flags.go:64] FLAG: --cgroup-root="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048856 4768 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048865 4768 flags.go:64] FLAG: --client-ca-file="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048874 4768 flags.go:64] FLAG: --cloud-config="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048883 4768 flags.go:64] FLAG: --cloud-provider="" 
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048891 4768 flags.go:64] FLAG: --cluster-dns="[]" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048902 4768 flags.go:64] FLAG: --cluster-domain="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048912 4768 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048921 4768 flags.go:64] FLAG: --config-dir="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048930 4768 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048941 4768 flags.go:64] FLAG: --container-log-max-files="5" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.048990 4768 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049001 4768 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049010 4768 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049019 4768 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049028 4768 flags.go:64] FLAG: --contention-profiling="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049038 4768 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049047 4768 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049057 4768 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049068 4768 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049083 4768 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049094 4768 
flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049106 4768 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049117 4768 flags.go:64] FLAG: --enable-load-reader="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049129 4768 flags.go:64] FLAG: --enable-server="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049139 4768 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049156 4768 flags.go:64] FLAG: --event-burst="100" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049169 4768 flags.go:64] FLAG: --event-qps="50" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049180 4768 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049191 4768 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049202 4768 flags.go:64] FLAG: --eviction-hard="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049214 4768 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049223 4768 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049233 4768 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049243 4768 flags.go:64] FLAG: --eviction-soft="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049282 4768 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049291 4768 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049301 4768 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 23 18:33:25 crc kubenswrapper[4768]: 
I0223 18:33:25.049310 4768 flags.go:64] FLAG: --experimental-mounter-path="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049320 4768 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049329 4768 flags.go:64] FLAG: --fail-swap-on="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049338 4768 flags.go:64] FLAG: --feature-gates="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049358 4768 flags.go:64] FLAG: --file-check-frequency="20s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049367 4768 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049378 4768 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049387 4768 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049396 4768 flags.go:64] FLAG: --healthz-port="10248" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049405 4768 flags.go:64] FLAG: --help="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049414 4768 flags.go:64] FLAG: --hostname-override="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049423 4768 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049432 4768 flags.go:64] FLAG: --http-check-frequency="20s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049441 4768 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049450 4768 flags.go:64] FLAG: --image-credential-provider-config="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049459 4768 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049468 4768 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049478 4768 
flags.go:64] FLAG: --image-service-endpoint="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049487 4768 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049496 4768 flags.go:64] FLAG: --kube-api-burst="100" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049505 4768 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049515 4768 flags.go:64] FLAG: --kube-api-qps="50" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049524 4768 flags.go:64] FLAG: --kube-reserved="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049533 4768 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049542 4768 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049551 4768 flags.go:64] FLAG: --kubelet-cgroups="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049560 4768 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049570 4768 flags.go:64] FLAG: --lock-file="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049580 4768 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049590 4768 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049599 4768 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049613 4768 flags.go:64] FLAG: --log-json-split-stream="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049624 4768 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049633 4768 flags.go:64] FLAG: --log-text-split-stream="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049642 4768 
flags.go:64] FLAG: --logging-format="text" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049652 4768 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049661 4768 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049672 4768 flags.go:64] FLAG: --manifest-url="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049681 4768 flags.go:64] FLAG: --manifest-url-header="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049693 4768 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049702 4768 flags.go:64] FLAG: --max-open-files="1000000" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049713 4768 flags.go:64] FLAG: --max-pods="110" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049723 4768 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049732 4768 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049742 4768 flags.go:64] FLAG: --memory-manager-policy="None" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049751 4768 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049760 4768 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049769 4768 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049779 4768 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049799 4768 flags.go:64] FLAG: --node-status-max-images="50" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049808 4768 
flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049817 4768 flags.go:64] FLAG: --oom-score-adj="-999" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049826 4768 flags.go:64] FLAG: --pod-cidr="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049835 4768 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049849 4768 flags.go:64] FLAG: --pod-manifest-path="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049858 4768 flags.go:64] FLAG: --pod-max-pids="-1" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049867 4768 flags.go:64] FLAG: --pods-per-core="0" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049877 4768 flags.go:64] FLAG: --port="10250" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049886 4768 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049895 4768 flags.go:64] FLAG: --provider-id="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049904 4768 flags.go:64] FLAG: --qos-reserved="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049913 4768 flags.go:64] FLAG: --read-only-port="10255" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049922 4768 flags.go:64] FLAG: --register-node="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049931 4768 flags.go:64] FLAG: --register-schedulable="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049941 4768 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049957 4768 flags.go:64] FLAG: --registry-burst="10" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049969 4768 flags.go:64] FLAG: --registry-qps="5" Feb 23 18:33:25 crc 
kubenswrapper[4768]: I0223 18:33:25.049979 4768 flags.go:64] FLAG: --reserved-cpus="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.049991 4768 flags.go:64] FLAG: --reserved-memory="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050005 4768 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050017 4768 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050028 4768 flags.go:64] FLAG: --rotate-certificates="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050039 4768 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050050 4768 flags.go:64] FLAG: --runonce="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050061 4768 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050073 4768 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050083 4768 flags.go:64] FLAG: --seccomp-default="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050092 4768 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050101 4768 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050112 4768 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050123 4768 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050135 4768 flags.go:64] FLAG: --storage-driver-password="root" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050180 4768 flags.go:64] FLAG: --storage-driver-secure="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050193 4768 flags.go:64] FLAG: --storage-driver-table="stats" Feb 23 
18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050205 4768 flags.go:64] FLAG: --storage-driver-user="root" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050216 4768 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050227 4768 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050236 4768 flags.go:64] FLAG: --system-cgroups="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050276 4768 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050298 4768 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050309 4768 flags.go:64] FLAG: --tls-cert-file="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050321 4768 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050334 4768 flags.go:64] FLAG: --tls-min-version="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050345 4768 flags.go:64] FLAG: --tls-private-key-file="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050357 4768 flags.go:64] FLAG: --topology-manager-policy="none" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050366 4768 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050374 4768 flags.go:64] FLAG: --topology-manager-scope="container" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050383 4768 flags.go:64] FLAG: --v="2" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050396 4768 flags.go:64] FLAG: --version="false" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050407 4768 flags.go:64] FLAG: --vmodule="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050420 4768 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 23
18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.050429 4768 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050645 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050656 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050665 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050674 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050682 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050693 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050702 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050711 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050722 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050732 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050742 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050752 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050768 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050776 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050784 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050792 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050800 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050808 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050816 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050825 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050832 4768 
feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050840 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050848 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050856 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050864 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050872 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050880 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050888 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050895 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050903 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050911 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050919 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050927 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050934 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050944 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 
23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050952 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050960 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050967 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050975 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050983 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.050991 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051000 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051009 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051019 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051037 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051047 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051056 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051064 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051074 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051084 4768 feature_gate.go:330] 
unrecognized feature gate: MachineAPIMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051094 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051104 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051112 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051120 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051128 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051136 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051145 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051155 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051165 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051175 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051185 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051198 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051212 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051223 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051233 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051242 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051285 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051296 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051308 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051318 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.051345 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.051372 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.068343 4768 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.068748 4768 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.068939 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.068961 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.068974 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.068988 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069001 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069012 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069023 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069034 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069045 4768 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069056 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069067 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069078 4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069089 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069100 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069111 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069123 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069134 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069145 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069156 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069169 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069180 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069191 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069202 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 18:33:25 
crc kubenswrapper[4768]: W0223 18:33:25.069212 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069223 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069235 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069290 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069314 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069327 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069340 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069353 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069366 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069377 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069389 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069401 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069413 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069425 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069439 4768 
feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069452 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069467 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069479 4768 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069491 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069503 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069514 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069526 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069537 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069550 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069566 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069580 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069591 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069603 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069614 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069627 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069639 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069650 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069661 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069673 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069683 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069695 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069706 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069717 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069728 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 18:33:25 crc 
kubenswrapper[4768]: W0223 18:33:25.069744 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069761 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069776 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069790 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069803 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069816 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069829 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069842 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.069854 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.069876 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070286 4768 feature_gate.go:330] unrecognized feature gate: 
InsightsConfigAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070315 4768 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070329 4768 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070341 4768 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070353 4768 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070365 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070376 4768 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070388 4768 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070399 4768 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070411 4768 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070422 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070433 4768 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070444 4768 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070457 4768 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070469 4768 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070480 
4768 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070495 4768 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070512 4768 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070526 4768 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070539 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070551 4768 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070563 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070574 4768 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070586 4768 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070599 4768 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070611 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070622 4768 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070634 4768 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070645 4768 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070657 4768 feature_gate.go:330] 
unrecognized feature gate: ImageStreamImportMode Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070670 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070682 4768 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070694 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070706 4768 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070717 4768 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070729 4768 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070740 4768 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070754 4768 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070769 4768 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070782 4768 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070794 4768 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070806 4768 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070817 4768 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070828 4768 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070839 4768 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070850 4768 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070861 4768 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070873 4768 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070883 4768 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070899 4768 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070912 4768 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070926 4768 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070938 4768 feature_gate.go:330] unrecognized feature gate: Example Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070950 4768 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070963 4768 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070977 4768 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.070991 4768 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071005 4768 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071016 4768 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071027 4768 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071038 4768 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071049 4768 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071060 4768 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071074 4768 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 23 
18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071085 4768 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071097 4768 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071109 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071119 4768 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071131 4768 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071142 4768 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.071153 4768 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.071170 4768 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.071588 4768 server.go:940] "Client rotation is on, will bootstrap in background" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.079709 4768 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.079866 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.081995 4768 server.go:997] "Starting client certificate rotation" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.082057 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.082390 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-23 16:10:02.871654651 +0000 UTC Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.082526 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.109723 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.112047 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.112909 4768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.129117 4768 log.go:25] "Validated CRI v1 runtime API" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.170071 4768 log.go:25] "Validated CRI v1 image API" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.172590 4768 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.178977 4768 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-23-18-29-03-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.179359 4768 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.208499 4768 manager.go:217] Machine: {Timestamp:2026-02-23 18:33:25.204433374 +0000 UTC m=+0.594919214 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:43a108d7-6740-4b29-827b-176ca14f7e0c BootID:572e458a-3489-410c-99b8-d0bc0a8b7420 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:9c:c5:9c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9c:c5:9c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a5:c4:fa Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:aa:6e:22 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:e5:b2:23 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:1a:a1:ba Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:cb:c9:fb:31:dc Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:59:66:fd:a2:bb Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.208841 4768 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.209099 4768 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.211180 4768 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.211596 4768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.211664 4768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.213947 4768 topology_manager.go:138] "Creating topology manager with none policy" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.214000 4768 container_manager_linux.go:303] "Creating device plugin manager" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.214489 4768 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.214538 4768 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.214803 4768 state_mem.go:36] "Initialized new in-memory state store" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.214943 4768 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.218577 4768 kubelet.go:418] "Attempting to sync node with API server" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.218614 4768 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.218643 4768 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.218666 4768 kubelet.go:324] "Adding apiserver pod source" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.218714 4768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.223111 4768 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.224437 4768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.226139 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.226346 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.226367 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.226506 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.226931 4768 kubelet.go:854] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229088 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229134 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229150 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229166 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229190 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229204 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229218 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229242 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229286 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229301 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229342 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.229356 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.230832 4768 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.231538 4768 server.go:1280] "Started kubelet" Feb 23 
18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.232565 4768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.232571 4768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.233240 4768 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.233758 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:25 crc systemd[1]: Started Kubernetes Kubelet. Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.236354 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.236503 4768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.236888 4768 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.236925 4768 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.237122 4768 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.237982 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.237765 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:29:09.937757844 +0000 UTC Feb 23 18:33:25 crc kubenswrapper[4768]: 
E0223 18:33:25.239402 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="200ms" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.239697 4768 factory.go:55] Registering systemd factory Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.239723 4768 factory.go:221] Registration of the systemd container factory successfully Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.240100 4768 factory.go:153] Registering CRI-O factory Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.240265 4768 factory.go:221] Registration of the crio container factory successfully Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.240430 4768 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.240887 4768 factory.go:103] Registering Raw factory Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.241051 4768 manager.go:1196] Started watching for new ooms in manager Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.239398 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC 
m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.240952 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.241740 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.250273 4768 manager.go:319] Starting recovery of all containers Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.251288 4768 server.go:460] "Adding debug handlers to kubelet server" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.259954 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260046 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260078 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260099 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260119 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260173 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260201 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260222 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260278 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260320 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260339 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260357 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260376 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260398 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260417 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260438 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260459 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260478 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260526 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260545 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260566 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260585 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260603 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260620 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260638 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260656 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260679 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" 
seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260701 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260732 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260760 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260788 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260816 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260888 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260922 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260959 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.260985 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.261006 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263163 4768 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263218 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 23 18:33:25 crc 
kubenswrapper[4768]: I0223 18:33:25.263243 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263326 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263348 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263366 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263390 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263410 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263429 4768 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263449 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263467 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263485 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263503 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263523 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263544 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263565 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263592 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263615 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263638 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263658 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263678 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263720 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263749 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263769 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263789 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263808 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263829 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263850 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263870 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263892 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263920 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263948 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.263973 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264002 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264028 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264046 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264065 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264085 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264120 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264141 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264162 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264181 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264201 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264220 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264240 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264295 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264317 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264336 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264356 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264375 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264395 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264414 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264433 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264452 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264478 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264498 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264519 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264538 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264557 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264577 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264596 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264616 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264640 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264663 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264685 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264706 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264726 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264745 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264778 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264800 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264826 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264848 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264868 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264889 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264918 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264943 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264973 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.264998 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265026 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265050 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265074 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265131 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265149 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265172 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265192 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265212 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265232 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265280 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265302 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265323 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265348 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265372 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265396 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265421 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265447 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265473 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265495 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265522 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265547 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265571 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265596 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265623 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265652 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265677 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265700 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265718 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265738 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265758 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265778 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265796 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265813 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265831 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265849 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265868 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265889 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265912 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265929 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265947 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265966 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.265983 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266000 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266016 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266093 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266115 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266134 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266151 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266170 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266189 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266206 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266224 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266243 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266290 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266309 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266328 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266348 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266368 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266386 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266404 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266422 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266440 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266459 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266478 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266496 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266514 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266533 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266555 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266579 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266606 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266635 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266670 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266696 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert"
seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266716 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266736 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266754 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266773 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266792 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266812 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266829 4768 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266847 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266863 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266882 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266908 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266931 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266959 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.266977 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267047 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267128 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267156 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267179 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267203 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267225 4768 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267284 4768 reconstruct.go:97] "Volume reconstruction finished" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.267300 4768 reconciler.go:26] "Reconciler: start to sync state" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.285154 4768 manager.go:324] Recovery completed Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.296010 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.298722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.298974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.299070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.300549 4768 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.300608 4768 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.300643 4768 state_mem.go:36] "Initialized new in-memory state store" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.303109 4768 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.306175 4768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.306273 4768 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.306315 4768 kubelet.go:2335] "Starting kubelet main sync loop" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.306395 4768 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.307371 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.307423 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.313584 4768 policy_none.go:49] "None policy: Start" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.314597 4768 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.314730 4768 state_mem.go:35] "Initializing new in-memory state store" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.338899 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.376671 4768 manager.go:334] 
"Starting Device Plugin manager" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.376797 4768 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.376825 4768 server.go:79] "Starting device plugin registration server" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.377504 4768 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.377537 4768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.377871 4768 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.378048 4768 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.378073 4768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.394912 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.407066 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.407351 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409286 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409730 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.409836 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.410595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.410643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.410657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.410843 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.411060 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.411132 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412198 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412401 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412439 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.412432 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413663 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413727 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.413684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.414317 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: 
I0223 18:33:25.414350 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.414382 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415616 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415890 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.415945 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.417359 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.417435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.417447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.441071 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="400ms" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470285 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470513 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470673 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470806 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470834 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470893 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.470970 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.471062 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.471106 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.477669 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.478804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.478860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.478897 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.478965 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.479821 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572744 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572807 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572937 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572972 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.572999 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573048 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573045 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573107 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573232 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573281 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573348 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573437 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573534 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.573749 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574120 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574122 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574135 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.574175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.680610 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.682430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.682599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.682703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.682843 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.683698 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.748845 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.773852 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.788324 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.799236 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7d75f544e23baf6027ecb02e54da4278e1b7de8f06bef0c402292f7fb09913f4 WatchSource:0}: Error finding container 7d75f544e23baf6027ecb02e54da4278e1b7de8f06bef0c402292f7fb09913f4: Status 404 returned error can't find the container with id 7d75f544e23baf6027ecb02e54da4278e1b7de8f06bef0c402292f7fb09913f4
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.814717 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: I0223 18:33:25.823474 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.835677 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a8ea1626e1d47947e78df405f8ca4a05ffc8e1b09fdfdca4fba3745036d282b3 WatchSource:0}: Error finding container a8ea1626e1d47947e78df405f8ca4a05ffc8e1b09fdfdca4fba3745036d282b3: Status 404 returned error can't find the container with id a8ea1626e1d47947e78df405f8ca4a05ffc8e1b09fdfdca4fba3745036d282b3
Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.842164 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="800ms"
Feb 23 18:33:25 crc kubenswrapper[4768]: W0223 18:33:25.850494 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-abc1e2c5da880862c89a71a3002a6efacf27209e43631e074b331e1fd7348b62 WatchSource:0}: Error finding container abc1e2c5da880862c89a71a3002a6efacf27209e43631e074b331e1fd7348b62: Status 404 returned error can't find the container with id abc1e2c5da880862c89a71a3002a6efacf27209e43631e074b331e1fd7348b62
Feb 23 18:33:25 crc kubenswrapper[4768]: E0223 18:33:25.933015 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.083814 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.085470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.085510 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.085521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.085545 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.085886 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Feb 23 18:33:26 crc kubenswrapper[4768]: W0223 18:33:26.143603 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.143710 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.234793 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.239907 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 22:02:11.185872547 +0000 UTC
Feb 23 18:33:26 crc kubenswrapper[4768]: W0223 18:33:26.295517 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.295629 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.314076 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"520d7a0b5b0dcefdf0eda306f0ac4e82fe23190f90d503c99e7aa7b911f47b9a"}
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.315315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d75f544e23baf6027ecb02e54da4278e1b7de8f06bef0c402292f7fb09913f4"}
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.317303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"abc1e2c5da880862c89a71a3002a6efacf27209e43631e074b331e1fd7348b62"}
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.318155 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a8ea1626e1d47947e78df405f8ca4a05ffc8e1b09fdfdca4fba3745036d282b3"}
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.319093 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"068748d2571ff38e73d2d9beb7f26a6cb522edb748c8b452049b93baac74a759"}
Feb 23 18:33:26 crc kubenswrapper[4768]: W0223 18:33:26.421058 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.421213 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:26 crc kubenswrapper[4768]: W0223 18:33:26.606618 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.606713 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.643697 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="1.6s"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.886939 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.888451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.888493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.888502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:26 crc kubenswrapper[4768]: I0223 18:33:26.888532 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 18:33:26 crc kubenswrapper[4768]: E0223 18:33:26.889071 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.200800 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 18:33:27 crc kubenswrapper[4768]: E0223 18:33:27.202669 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.235177 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.240535 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:46:47.506478844 +0000 UTC
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.325638 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d" exitCode=0
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.325828 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.326186 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.329150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.329219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.329238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.332771 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.332880 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567" exitCode=0
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.333013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.333150 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.334722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.334779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.334800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.335547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.335599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.335617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.336442 4768 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14" exitCode=0
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.336561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.336536 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.338019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.338053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.338096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.342238 4768 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d" exitCode=0
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.342373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.342530 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.344724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.344781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.344798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.348057 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.348131 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.348152 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.348171 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54"}
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.348331 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.349633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.349680 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.349698 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:27 crc kubenswrapper[4768]: I0223 18:33:27.968934 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:28 crc kubenswrapper[4768]: W0223 18:33:28.217603 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:28 crc kubenswrapper[4768]: E0223 18:33:28.217700 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.235330 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.241079 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 14:40:17.274998698 +0000 UTC
Feb 23 18:33:28 crc kubenswrapper[4768]: E0223 18:33:28.244946 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="3.2s"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.353953 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f" exitCode=0
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.354172 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.354346 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.355432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.355460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.355470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.356743 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.356739 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.357524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.357554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.357566 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.360225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.360271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.360283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.360319 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.361411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.361437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.361450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.363390 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.363708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.363747 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.363757 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131"}
Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.363768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a"} Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.364441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.364502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.364516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.490232 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.491613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.491662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.491671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:28 crc kubenswrapper[4768]: I0223 18:33:28.491702 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:33:28 crc kubenswrapper[4768]: E0223 18:33:28.492135 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.115:6443: connect: connection refused" node="crc" Feb 23 18:33:28 crc kubenswrapper[4768]: W0223 18:33:28.588783 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:28 crc kubenswrapper[4768]: E0223 18:33:28.588891 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:28 crc kubenswrapper[4768]: W0223 18:33:28.849435 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.115:6443: connect: connection refused Feb 23 18:33:28 crc kubenswrapper[4768]: E0223 18:33:28.849571 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.115:6443: connect: connection refused" logger="UnhandledError" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.241914 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:23:49.00317701 +0000 UTC Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.369085 4768 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579" exitCode=0 Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.369177 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579"} Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.369216 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.370331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.370466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.370545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.374110 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.374193 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.375141 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.375439 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.375530 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.375424 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"40d7a2296be47d1f4c705166a66751f3d0f4dd08ce9172142d4446264931e3ee"} Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 
18:33:29.376559 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.376916 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.378033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.378070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:29 crc kubenswrapper[4768]: I0223 18:33:29.378080 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.022829 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.242291 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:42:13.19572562 +0000 UTC Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.381844 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9"} Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.381915 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff"} Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.381929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a"} Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.381938 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.382033 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.381943 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94"} Feb 23 
18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.382151 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.383400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.723330 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.970135 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 18:33:30 crc kubenswrapper[4768]: I0223 18:33:30.970320 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.243506 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:38:20.865571121 +0000 UTC Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.285873 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.394354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7"} Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.394459 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.394463 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.396062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.396106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.396124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.396167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:31 crc kubenswrapper[4768]: 
I0223 18:33:31.396216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.396235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.692625 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.694728 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.694805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.694826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.694868 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.950604 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.950902 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.952857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.952912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:31 crc kubenswrapper[4768]: I0223 18:33:31.952930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 
18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.244229 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 22:25:39.307375594 +0000 UTC Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.398382 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.398421 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400382 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400387 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.400548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:32 crc kubenswrapper[4768]: I0223 18:33:32.997303 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.245091 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:23:08.763486897 
+0000 UTC Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.401577 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.402826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.402883 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.402905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.724484 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.724741 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.726197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.726282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:33 crc kubenswrapper[4768]: I0223 18:33:33.726305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:34 crc kubenswrapper[4768]: I0223 18:33:34.245873 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 19:13:18.31309538 +0000 UTC Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.174710 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.175086 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.177000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.177053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.177062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.184072 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.246565 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 10:38:58.619615148 +0000 UTC Feb 23 18:33:35 crc kubenswrapper[4768]: E0223 18:33:35.395528 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.409307 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.410818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.410901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.410936 4768 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:35 crc kubenswrapper[4768]: I0223 18:33:35.611391 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.246981 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:23:51.888507878 +0000 UTC Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.413156 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.414821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.414885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.414902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:36 crc kubenswrapper[4768]: I0223 18:33:36.421171 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:37 crc kubenswrapper[4768]: I0223 18:33:37.247681 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:56:38.995376628 +0000 UTC Feb 23 18:33:37 crc kubenswrapper[4768]: I0223 18:33:37.416131 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:37 crc kubenswrapper[4768]: I0223 18:33:37.417799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 
18:33:37 crc kubenswrapper[4768]: I0223 18:33:37.417852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:37 crc kubenswrapper[4768]: I0223 18:33:37.417873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:38 crc kubenswrapper[4768]: I0223 18:33:38.247948 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 09:13:14.38577928 +0000 UTC Feb 23 18:33:39 crc kubenswrapper[4768]: W0223 18:33:39.056341 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.056455 4768 trace.go:236] Trace[479988240]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Feb-2026 18:33:29.054) (total time: 10001ms): Feb 23 18:33:39 crc kubenswrapper[4768]: Trace[479988240]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:33:39.056) Feb 23 18:33:39 crc kubenswrapper[4768]: Trace[479988240]: [10.001914695s] [10.001914695s] END Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.056482 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.204390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 23 
18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.204611 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.205809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.205864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.205879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.236570 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.248844 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 07:51:16.965957085 +0000 UTC Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.255928 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.428740 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.431125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.431186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.431200 4768 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.449526 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 23 18:33:39 crc kubenswrapper[4768]: W0223 18:33:39.813737 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.813849 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:39 crc kubenswrapper[4768]: W0223 18:33:39.816996 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.817063 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:39 crc kubenswrapper[4768]: W0223 18:33:39.826773 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.826864 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.827105 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.827238 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.830369 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.830836 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.831369 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.834167 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 23 18:33:39 crc kubenswrapper[4768]: E0223 18:33:39.834151 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:39Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 18:33:39 crc kubenswrapper[4768]: I0223 18:33:39.834271 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.238013 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:40Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.249089 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 13:45:16.286295921 +0000 UTC
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.434693 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.437606 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="40d7a2296be47d1f4c705166a66751f3d0f4dd08ce9172142d4446264931e3ee" exitCode=255
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.437766 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.437728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"40d7a2296be47d1f4c705166a66751f3d0f4dd08ce9172142d4446264931e3ee"}
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.438190 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439547 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439563 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.439638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.440155 4768 scope.go:117] "RemoveContainer" containerID="40d7a2296be47d1f4c705166a66751f3d0f4dd08ce9172142d4446264931e3ee"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.730722 4768 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]log ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]etcd ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/priority-and-fairness-filter ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-apiextensions-informers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-apiextensions-controllers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/crd-informer-synced ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-system-namespaces-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 23 18:33:40 crc kubenswrapper[4768]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/bootstrap-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-registration-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-discovery-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]autoregister-completion ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-openapi-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 23 18:33:40 crc kubenswrapper[4768]: livez check failed
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.730818 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.969768 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 18:33:40 crc kubenswrapper[4768]: I0223 18:33:40.969874 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.238469 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:41Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.249565 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:38:42.918542511 +0000 UTC
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.443221 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.444368 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.446767 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5" exitCode=255
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.446812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5"}
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.446859 4768 scope.go:117] "RemoveContainer" containerID="40d7a2296be47d1f4c705166a66751f3d0f4dd08ce9172142d4446264931e3ee"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.447030 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.448231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.448273 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.448284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:41 crc kubenswrapper[4768]: I0223 18:33:41.448923 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5"
Feb 23 18:33:41 crc kubenswrapper[4768]: E0223 18:33:41.449152 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:33:42 crc kubenswrapper[4768]: I0223 18:33:42.236718 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:42Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:42 crc kubenswrapper[4768]: I0223 18:33:42.250135 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:12:09.82553161 +0000 UTC
Feb 23 18:33:42 crc kubenswrapper[4768]: I0223 18:33:42.453773 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 23 18:33:43 crc kubenswrapper[4768]: I0223 18:33:43.239730 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:43Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:43 crc kubenswrapper[4768]: I0223 18:33:43.250889 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 23:26:45.798730712 +0000 UTC
Feb 23 18:33:44 crc kubenswrapper[4768]: I0223 18:33:44.238456 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:44Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:44 crc kubenswrapper[4768]: I0223 18:33:44.251775 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 05:36:45.056634284 +0000 UTC
Feb 23 18:33:44 crc kubenswrapper[4768]: W0223 18:33:44.800760 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:44Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:44 crc kubenswrapper[4768]: E0223 18:33:44.800887 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.239974 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:45Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.252162 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 04:42:23.488763035 +0000 UTC
Feb 23 18:33:45 crc kubenswrapper[4768]: E0223 18:33:45.396350 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.732528 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.732791 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.734413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.734447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.734458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.735038 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5"
Feb 23 18:33:45 crc kubenswrapper[4768]: E0223 18:33:45.735273 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.739209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.813555 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:45 crc kubenswrapper[4768]: I0223 18:33:45.869787 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.230989 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.232876 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.232960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.232981 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.233031 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 23 18:33:46 crc kubenswrapper[4768]: E0223 18:33:46.235451 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:46Z is after 2026-02-23T05:33:13Z" interval="7s"
Feb 23 18:33:46 crc kubenswrapper[4768]: E0223 18:33:46.237939 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:46Z is after 2026-02-23T05:33:13Z" node="crc"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.240676 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:46Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.252807 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:52:06.622727688 +0000 UTC
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.470938 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.472176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.472275 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.472342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:46 crc kubenswrapper[4768]: I0223 18:33:46.473341 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5"
Feb 23 18:33:46 crc kubenswrapper[4768]: E0223 18:33:46.473756 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.240957 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:47Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.253082 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:44:11.068857687 +0000 UTC
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.474370 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.475565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.475633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.475654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.476725 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5"
Feb 23 18:33:47 crc kubenswrapper[4768]: E0223 18:33:47.477053 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:33:47 crc kubenswrapper[4768]: I0223 18:33:47.870472 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 23 18:33:47 crc kubenswrapper[4768]: W0223 18:33:47.874700 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:47Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:47 crc kubenswrapper[4768]: E0223 18:33:47.874826 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:47 crc kubenswrapper[4768]: E0223 18:33:47.876739 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Feb 23 18:33:48 crc kubenswrapper[4768]: I0223 18:33:48.240963 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:48Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:48 crc kubenswrapper[4768]: I0223 18:33:48.254113 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:05:11.712956875 +0000 UTC
Feb 23 18:33:49 crc kubenswrapper[4768]: I0223 18:33:49.239794 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:49Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:49 crc kubenswrapper[4768]: I0223 18:33:49.254831 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:28:10.144413957 +0000 UTC
Feb 23 18:33:49 crc kubenswrapper[4768]: E0223 18:33:49.840406 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:49Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.239639 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:50Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.255789 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 06:36:17.90444314 +0000 UTC
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.969597 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.969681 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.969755 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.969935 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.971859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.971917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.971939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.972664 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Feb 23 18:33:50 crc kubenswrapper[4768]: I0223 18:33:50.972923 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc" gracePeriod=30
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.240535 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:51Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.256624 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:49:37.197130963 +0000 UTC
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.492525 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.493177 4768 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc" exitCode=255
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.493236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc"}
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.493320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed"}
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.493458 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.497376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.497439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:33:51 crc kubenswrapper[4768]: I0223 18:33:51.497457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:33:51 crc kubenswrapper[4768]: W0223 18:33:51.865642 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:51Z is after 2026-02-23T05:33:13Z
Feb 23 18:33:51 crc kubenswrapper[4768]: E0223 18:33:51.865758 4768 reflector.go:158] "Unhandled Error"
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:33:52 crc kubenswrapper[4768]: I0223 18:33:52.237897 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:52Z is after 2026-02-23T05:33:13Z Feb 23 18:33:52 crc kubenswrapper[4768]: I0223 18:33:52.257098 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:51:10.093777268 +0000 UTC Feb 23 18:33:52 crc kubenswrapper[4768]: W0223 18:33:52.385140 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:52Z is after 2026-02-23T05:33:13Z Feb 23 18:33:52 crc kubenswrapper[4768]: E0223 18:33:52.385306 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 
18:33:53.238290 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 18:33:53.238308 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:53Z is after 2026-02-23T05:33:13Z Feb 23 18:33:53 crc kubenswrapper[4768]: E0223 18:33:53.239951 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:53Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 18:33:53.242405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 18:33:53.242448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 18:33:53.242466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 18:33:53.242508 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:33:53 crc kubenswrapper[4768]: E0223 18:33:53.247584 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:53Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 18:33:53 crc kubenswrapper[4768]: I0223 
18:33:53.257792 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:50:11.626058408 +0000 UTC Feb 23 18:33:54 crc kubenswrapper[4768]: I0223 18:33:54.240460 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:54Z is after 2026-02-23T05:33:13Z Feb 23 18:33:54 crc kubenswrapper[4768]: I0223 18:33:54.258707 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 17:12:33.962501052 +0000 UTC Feb 23 18:33:55 crc kubenswrapper[4768]: I0223 18:33:55.239285 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:55Z is after 2026-02-23T05:33:13Z Feb 23 18:33:55 crc kubenswrapper[4768]: I0223 18:33:55.259409 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:39:08.918961921 +0000 UTC Feb 23 18:33:55 crc kubenswrapper[4768]: E0223 18:33:55.396596 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:33:56 crc kubenswrapper[4768]: I0223 18:33:56.238036 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-02-23T18:33:56Z is after 2026-02-23T05:33:13Z Feb 23 18:33:56 crc kubenswrapper[4768]: I0223 18:33:56.260365 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 18:01:51.955018716 +0000 UTC Feb 23 18:33:56 crc kubenswrapper[4768]: W0223 18:33:56.848911 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:56Z is after 2026-02-23T05:33:13Z Feb 23 18:33:56 crc kubenswrapper[4768]: E0223 18:33:56.849058 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.240318 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:57Z is after 2026-02-23T05:33:13Z Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.261433 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 04:10:27.58058114 +0000 UTC Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.969713 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.969975 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.974352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.974412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:57 crc kubenswrapper[4768]: I0223 18:33:57.974423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:58 crc kubenswrapper[4768]: I0223 18:33:58.241334 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:58Z is after 2026-02-23T05:33:13Z Feb 23 18:33:58 crc kubenswrapper[4768]: I0223 18:33:58.262462 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:02:18.993289205 +0000 UTC Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.239291 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:59Z is after 2026-02-23T05:33:13Z Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.263558 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-12-16 14:03:46.783149301 +0000 UTC Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.306716 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.308513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.308713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.308854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:33:59 crc kubenswrapper[4768]: I0223 18:33:59.309760 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5" Feb 23 18:33:59 crc kubenswrapper[4768]: E0223 18:33:59.846178 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:33:59Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.023437 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.023676 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.025375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.025438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.025449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.238103 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:00Z is after 2026-02-23T05:33:13Z Feb 23 18:34:00 crc kubenswrapper[4768]: E0223 18:34:00.244043 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:00Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.248333 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.250316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.250355 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.250366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.250405 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:34:00 crc kubenswrapper[4768]: E0223 18:34:00.256790 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:00Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.263761 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:48:10.093837496 +0000 UTC Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.532870 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.533688 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.536283 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" exitCode=255 Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.536348 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc"} Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.536413 4768 scope.go:117] "RemoveContainer" containerID="99eda65037ea744d1e2ef35a1f4903ddfdcc055c4624f5885245abe8a8cfbda5" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.536714 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.538304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.538816 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.539738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.540879 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" Feb 23 18:34:00 crc kubenswrapper[4768]: E0223 18:34:00.541468 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.970094 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 18:34:00 crc kubenswrapper[4768]: I0223 18:34:00.970305 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 18:34:01 crc kubenswrapper[4768]: I0223 18:34:01.238089 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:01Z is after 2026-02-23T05:33:13Z Feb 23 18:34:01 crc kubenswrapper[4768]: I0223 18:34:01.264423 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 09:25:11.239539419 +0000 UTC Feb 23 18:34:01 crc kubenswrapper[4768]: I0223 18:34:01.542096 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 18:34:02 crc kubenswrapper[4768]: I0223 18:34:02.239951 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:02Z is after 2026-02-23T05:33:13Z Feb 23 18:34:02 crc kubenswrapper[4768]: I0223 18:34:02.265674 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-12-02 02:14:09.433251152 +0000 UTC Feb 23 18:34:02 crc kubenswrapper[4768]: W0223 18:34:02.592837 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:02Z is after 2026-02-23T05:33:13Z Feb 23 18:34:02 crc kubenswrapper[4768]: E0223 18:34:02.593488 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:02Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:34:03 crc kubenswrapper[4768]: I0223 18:34:03.238327 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:03Z is after 2026-02-23T05:33:13Z Feb 23 18:34:03 crc kubenswrapper[4768]: I0223 18:34:03.266000 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:17:39.349758272 +0000 UTC Feb 23 18:34:04 crc kubenswrapper[4768]: I0223 18:34:04.239658 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:04Z is after 
2026-02-23T05:33:13Z Feb 23 18:34:04 crc kubenswrapper[4768]: I0223 18:34:04.267116 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:22:13.437102155 +0000 UTC Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.065627 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 18:34:05 crc kubenswrapper[4768]: E0223 18:34:05.069491 4768 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:05Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:34:05 crc kubenswrapper[4768]: E0223 18:34:05.070771 4768 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.239780 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:05Z is after 2026-02-23T05:33:13Z Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.268061 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:56:01.702964273 +0000 UTC Feb 23 18:34:05 crc kubenswrapper[4768]: E0223 18:34:05.396845 4768 eviction_manager.go:285] 
"Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.813908 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.814167 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.815723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.815775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.815792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.816571 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" Feb 23 18:34:05 crc kubenswrapper[4768]: E0223 18:34:05.816844 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:05 crc kubenswrapper[4768]: I0223 18:34:05.869479 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:34:06 crc kubenswrapper[4768]: W0223 18:34:06.197285 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:06Z is after 2026-02-23T05:33:13Z Feb 23 18:34:06 crc kubenswrapper[4768]: E0223 18:34:06.197382 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:06Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.238473 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:06Z is after 2026-02-23T05:33:13Z Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.269165 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 13:18:06.519444267 +0000 UTC Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.564337 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.566116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.566191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.566211 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:06 crc kubenswrapper[4768]: I0223 18:34:06.567218 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" Feb 23 18:34:06 crc kubenswrapper[4768]: E0223 18:34:06.567546 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.240115 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:07Z is after 2026-02-23T05:33:13Z Feb 23 18:34:07 crc kubenswrapper[4768]: E0223 18:34:07.250328 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:07Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.257230 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.259120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.259318 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.259429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.259551 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:34:07 crc kubenswrapper[4768]: E0223 18:34:07.264414 4768 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:07Z is after 2026-02-23T05:33:13Z" node="crc" Feb 23 18:34:07 crc kubenswrapper[4768]: I0223 18:34:07.269718 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 02:07:39.836351467 +0000 UTC Feb 23 18:34:08 crc kubenswrapper[4768]: I0223 18:34:08.240351 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:08Z is after 2026-02-23T05:33:13Z Feb 23 18:34:08 crc kubenswrapper[4768]: I0223 18:34:08.270937 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:37:28.968810291 +0000 UTC Feb 23 18:34:09 crc kubenswrapper[4768]: I0223 18:34:09.240066 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:09Z is after 2026-02-23T05:33:13Z Feb 23 
18:34:09 crc kubenswrapper[4768]: I0223 18:34:09.272047 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:06:11.870964547 +0000 UTC Feb 23 18:34:09 crc kubenswrapper[4768]: E0223 18:34:09.850828 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:09Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.1896f3d6516cac9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,LastTimestamp:2026-02-23 18:33:25.231496346 +0000 UTC m=+0.621982186,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:34:10 crc kubenswrapper[4768]: I0223 18:34:10.239411 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:10Z is after 2026-02-23T05:33:13Z Feb 23 18:34:10 crc kubenswrapper[4768]: I0223 18:34:10.272993 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:15:14.260463202 +0000 UTC Feb 23 18:34:10 crc kubenswrapper[4768]: W0223 18:34:10.348412 4768 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:10Z is after 2026-02-23T05:33:13Z Feb 23 18:34:10 crc kubenswrapper[4768]: E0223 18:34:10.348578 4768 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 23 18:34:10 crc kubenswrapper[4768]: I0223 18:34:10.970565 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 18:34:10 crc kubenswrapper[4768]: I0223 18:34:10.970714 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 18:34:11 crc kubenswrapper[4768]: I0223 18:34:11.239801 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T18:34:11Z is after 2026-02-23T05:33:13Z Feb 23 18:34:11 crc kubenswrapper[4768]: I0223 18:34:11.273919 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:12:19.991382956 +0000 UTC Feb 23 18:34:12 crc kubenswrapper[4768]: I0223 18:34:12.240214 4768 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:12Z is after 2026-02-23T05:33:13Z Feb 23 18:34:12 crc kubenswrapper[4768]: I0223 18:34:12.274738 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:11:03.117305075 +0000 UTC Feb 23 18:34:13 crc kubenswrapper[4768]: I0223 18:34:13.068590 4768 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 23 18:34:13 crc kubenswrapper[4768]: I0223 18:34:13.274850 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 07:04:13.185660344 +0000 UTC Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.265584 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.267291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.267347 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.267365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.267538 4768 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.275553 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:29:06.22272544 +0000 UTC Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.282494 4768 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.282966 4768 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.283112 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.287320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.287374 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.287392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.287420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.287478 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:14Z","lastTransitionTime":"2026-02-23T18:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.306708 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.317533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.317842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.318056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.318311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.318536 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:14Z","lastTransitionTime":"2026-02-23T18:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.334188 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.353139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.353507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.353659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.353809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.353957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:14Z","lastTransitionTime":"2026-02-23T18:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.369413 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.379819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.379902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.379929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.379963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:14 crc kubenswrapper[4768]: I0223 18:34:14.379993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:14Z","lastTransitionTime":"2026-02-23T18:34:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.400198 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.400486 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.400530 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.501667 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.602890 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.703901 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.804534 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:14 crc kubenswrapper[4768]: E0223 18:34:14.905549 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.006285 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.106740 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.207051 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: I0223 18:34:15.277333 4768 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:08:44.754524108 +0000 UTC Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.308054 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.397158 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.408513 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.509427 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.610155 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.710504 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.811564 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:15 crc kubenswrapper[4768]: E0223 18:34:15.911939 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.012772 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.112904 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.213682 4768 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: I0223 18:34:16.278478 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:15:42.799908802 +0000 UTC Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.314340 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.414500 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.515665 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.616396 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.717343 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.818236 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:16 crc kubenswrapper[4768]: E0223 18:34:16.919548 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.020501 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.121343 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.222183 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: I0223 18:34:17.279468 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:12:15.806150555 +0000 UTC Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.323373 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.424698 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.526058 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.626224 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.726357 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.827345 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:17 crc kubenswrapper[4768]: E0223 18:34:17.928217 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.028937 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.129347 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.211200 4768 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 18:34:18 crc 
kubenswrapper[4768]: E0223 18:34:18.229475 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.280715 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:12:04.913653597 +0000 UTC Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.306845 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.308361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.308423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.308447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:18 crc kubenswrapper[4768]: I0223 18:34:18.309640 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.309982 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.330424 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.430525 4768 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.530880 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.631755 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.732152 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.833308 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:18 crc kubenswrapper[4768]: E0223 18:34:18.933812 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.034646 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.135038 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.235690 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: I0223 18:34:19.281558 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:39:26.112818505 +0000 UTC Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.336361 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.436454 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.536843 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.637705 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.738371 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.839237 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:19 crc kubenswrapper[4768]: E0223 18:34:19.939971 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.040948 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.141858 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.242571 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.282234 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:43:17.458621931 +0000 UTC Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.343894 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.444315 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc 
kubenswrapper[4768]: E0223 18:34:20.545048 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.645943 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.746527 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.847356 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: E0223 18:34:20.948525 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.969835 4768 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.969921 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.969991 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.970231 4768 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.975288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.975358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.975380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.976207 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 23 18:34:20 crc kubenswrapper[4768]: I0223 18:34:20.976468 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed" gracePeriod=30 Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.049584 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.150401 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.250898 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: 
I0223 18:34:21.283641 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 07:53:43.314634977 +0000 UTC Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.351648 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.452459 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.553562 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.607873 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.609958 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.610635 4768 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed" exitCode=255 Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.610708 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed"} Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.610764 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163"} Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.610794 4768 scope.go:117] "RemoveContainer" containerID="cd9e5b89eb2f56f768b5231eb898f85016aa8d6894f1c03778b9aa62a7ba3bbc" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.610946 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.612187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.612278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.612306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.654307 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.755348 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.856120 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: E0223 18:34:21.956282 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.957570 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.957722 4768 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.959474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.959526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:21 crc kubenswrapper[4768]: I0223 18:34:21.959546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.057068 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.157881 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.258343 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.284749 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 09:11:04.803509073 +0000 UTC Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.358919 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.459952 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.561054 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.619455 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.621234 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.622633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.622709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:22 crc kubenswrapper[4768]: I0223 18:34:22.622729 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.661399 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.762488 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.863558 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:22 crc kubenswrapper[4768]: E0223 18:34:22.964510 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.065112 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.165825 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.266785 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: I0223 18:34:23.285341 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:32:36.006928104 +0000 UTC Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.367346 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.468276 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.569106 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.669823 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.770484 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.871337 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:23 crc kubenswrapper[4768]: E0223 18:34:23.972343 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.073145 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.173393 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.273792 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc 
kubenswrapper[4768]: I0223 18:34:24.286355 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:10:51.406069852 +0000 UTC Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.374637 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.475837 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.523165 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.528848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.528884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.528892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.528910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.528922 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:24Z","lastTransitionTime":"2026-02-23T18:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.539885 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.543995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.544051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.544065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.544086 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:24 crc kubenswrapper[4768]: I0223 18:34:24.544101 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:24Z","lastTransitionTime":"2026-02-23T18:34:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.590888 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.591007 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.591045 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.691941 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.792755 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.893538 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:24 crc kubenswrapper[4768]: E0223 18:34:24.994600 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.095339 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.195656 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: I0223 18:34:25.287307 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:10:45.508855339 +0000 UTC Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.296729 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not 
found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.396890 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.397225 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.497003 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.597960 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.698064 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.798353 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.898838 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:25 crc kubenswrapper[4768]: E0223 18:34:25.999714 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.100872 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.201952 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: I0223 18:34:26.288089 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:50:53.529380778 +0000 UTC Feb 23 18:34:26 crc 
kubenswrapper[4768]: E0223 18:34:26.302347 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.403484 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.504548 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.605526 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.706680 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.808063 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:26 crc kubenswrapper[4768]: E0223 18:34:26.908890 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.009774 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.110905 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.211737 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.289064 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:27:16.497441061 +0000 UTC Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.312468 4768 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.413419 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.514213 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.614959 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.715416 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.816507 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: E0223 18:34:27.917386 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.969644 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.969815 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.973987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.974139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.974242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 23 18:34:27 crc kubenswrapper[4768]: I0223 18:34:27.977273 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.018019 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.118878 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.220039 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.289755 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:09:58.068076488 +0000 UTC Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.320346 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.420876 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.521639 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.622115 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.635592 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.635647 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.636574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.636618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:28 crc kubenswrapper[4768]: I0223 18:34:28.636635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.722714 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.823817 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:28 crc kubenswrapper[4768]: E0223 18:34:28.924765 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.025543 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.126486 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.227629 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: I0223 18:34:29.290069 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:25:25.525931258 +0000 UTC Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.328519 4768 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.429686 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.530694 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.631426 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: I0223 18:34:29.637306 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:29 crc kubenswrapper[4768]: I0223 18:34:29.638306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:29 crc kubenswrapper[4768]: I0223 18:34:29.638348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:29 crc kubenswrapper[4768]: I0223 18:34:29.638367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.731506 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.832588 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:29 crc kubenswrapper[4768]: E0223 18:34:29.933428 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.034453 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.135523 4768 kubelet_node_status.go:503] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.235824 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: I0223 18:34:30.291642 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 05:47:01.544804259 +0000 UTC Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.336957 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.438047 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.538364 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.638557 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.739022 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.839961 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:30 crc kubenswrapper[4768]: E0223 18:34:30.940720 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.041724 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.142887 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.243916 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.292609 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 16:56:00.573123662 +0000 UTC Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.306900 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.308063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.308130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.308151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.309445 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.344720 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.445456 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.546893 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.644318 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.646435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6"} Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.646591 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.647499 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.647869 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.647920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:31 crc kubenswrapper[4768]: I0223 18:34:31.647942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.748458 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.849194 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:31 crc kubenswrapper[4768]: E0223 18:34:31.950004 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.050108 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:32 
crc kubenswrapper[4768]: E0223 18:34:32.151224 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.252037 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.293416 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 03:44:15.264834433 +0000 UTC
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.352901 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.453739 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.554260 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.650858 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.651348 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.653561 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" exitCode=255
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.653619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6"}
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.653668 4768 scope.go:117] "RemoveContainer" containerID="25925643c251beca5075b2e7517f17137c1f370ef0a773972f42e58f877b37cc"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.653847 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.654523 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.655839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.655918 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.655949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:32 crc kubenswrapper[4768]: I0223 18:34:32.657146 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.657547 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.755462 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.855729 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:32 crc kubenswrapper[4768]: E0223 18:34:32.956344 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.057123 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.157636 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.258503 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: I0223 18:34:33.294090 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:36:58.39841784 +0000 UTC
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.358751 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.459610 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.560232 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: I0223 18:34:33.658850 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.660645 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.761386 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.861799 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:33 crc kubenswrapper[4768]: E0223 18:34:33.962594 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.063285 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.164353 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.264727 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.295080 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:32:53.243583806 +0000 UTC
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.364833 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.464935 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.565937 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.666117 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.767191 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.868027 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.913386 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.918337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.918379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.918393 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.918414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.918427 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:34Z","lastTransitionTime":"2026-02-23T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.930009 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.933774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.933813 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.933827 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.933845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.933860 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:34Z","lastTransitionTime":"2026-02-23T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.945722 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.949993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.950050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.950070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.950091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.950107 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:34Z","lastTransitionTime":"2026-02-23T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.960483 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.964439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.964478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.964493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.964513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:34 crc kubenswrapper[4768]: I0223 18:34:34.964527 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:34Z","lastTransitionTime":"2026-02-23T18:34:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.975470 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.975639 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:34:34 crc kubenswrapper[4768]: E0223 18:34:34.975678 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.076038 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.177151 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.277797 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.296267 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:06:34.918571311 +0000 UTC Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.378707 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.397365 4768 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.479237 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.580334 4768 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.681127 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.781785 4768 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.814055 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.814300 4768 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.815953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.816015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.816038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.817048 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:34:35 crc kubenswrapper[4768]: E0223 18:34:35.817363 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.864240 4768 reflector.go:368] Caches populated for 
*v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.869103 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.884668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.884707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.884725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.884744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.884762 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:35Z","lastTransitionTime":"2026-02-23T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.987469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.987520 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.987531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.987546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:35 crc kubenswrapper[4768]: I0223 18:34:35.987555 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:35Z","lastTransitionTime":"2026-02-23T18:34:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.089932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.089968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.089977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.089991 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.090002 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.193346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.193397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.193413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.193434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.193462 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.255759 4768 apiserver.go:52] "Watching apiserver" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.260786 4768 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.260993 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261475 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261499 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261548 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261587 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261679 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.261725 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.261853 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.261915 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.262015 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.263904 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.264229 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.264543 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.265365 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.266205 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.266536 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.266595 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.266597 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.266713 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295265 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295844 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.295855 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.297022 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:11:24.785844662 +0000 UTC Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.307576 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.320653 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.331428 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.337651 4768 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.343003 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.357343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.367728 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.398688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.398739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.398750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.398805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.398820 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419148 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419206 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419269 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419336 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 
18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419400 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419432 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419493 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419620 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419686 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419714 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419736 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419760 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419840 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419864 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419886 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.419987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420013 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420047 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420059 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420075 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420147 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420212 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420239 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420279 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420351 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420358 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420374 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420397 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420422 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420449 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420465 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420475 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420571 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420609 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420648 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420743 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420769 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420805 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420811 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420821 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420839 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420876 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420894 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420940 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420958 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420960 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420977 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.420996 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421055 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421211 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421416 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421489 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421508 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421612 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421770 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421806 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421839 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421880 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421910 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421940 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421970 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421999 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422035 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422067 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422099 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422127 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422156 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422739 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422782 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422840 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422887 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422938 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423021 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423288 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423375 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423559 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423755 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423935 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424139 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424187 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425041 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425081 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425124 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425240 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425316 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425360 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425405 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425447 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425490 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425532 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421767 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421803 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421798 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421834 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425747 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425797 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425891 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425930 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426021 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426066 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426106 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426196 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426232 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426303 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426352 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421982 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.421997 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422646 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422744 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422821 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422873 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426529 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423060 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: 
"1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423069 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423902 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.423949 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424067 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424293 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424356 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424376 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424506 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.424673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.425587 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422162 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.426916 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.427396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.427620 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.427775 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428321 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428386 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428018 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428040 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429032 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429107 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429131 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429154 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429480 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.428537 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429691 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430116 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430279 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.429724 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430699 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.431835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.431985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432172 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432293 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432443 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432494 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432630 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432660 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432856 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430492 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.432936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433074 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433145 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433259 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433285 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433310 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433335 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433361 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433421 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433467 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433490 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433511 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433529 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433542 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433718 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433574 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433736 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.433765 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434069 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.430866 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434301 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434489 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434544 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434559 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434685 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434791 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.434821 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435286 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.434909 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:34:36.934881456 +0000 UTC m=+72.325367266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435346 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435401 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 18:34:36 crc 
kubenswrapper[4768]: I0223 18:34:36.435495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435603 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435628 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435652 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435675 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435702 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435747 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435768 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435812 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435870 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435896 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435919 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435942 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435967 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436017 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436041 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436191 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436212 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436256 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436329 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 23 18:34:36 crc 
kubenswrapper[4768]: I0223 18:34:36.436407 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436433 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436459 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436485 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436507 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436530 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436557 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436583 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436625 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436645 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436664 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436686 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436707 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436730 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436754 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436777 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436802 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436849 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436928 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436951 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436978 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437031 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437056 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437110 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437134 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437231 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437324 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435545 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435107 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435695 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.435805 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.422162 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436051 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436400 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.436741 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437446 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437550 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437601 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437647 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.437858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.438061 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.439076 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.439136 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:36.939119573 +0000 UTC m=+72.329605363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.439391 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.439492 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.439595 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:36.939563095 +0000 UTC m=+72.330048895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.439496 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.439832 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.439942 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.440014 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.440102 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.440449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.440538 4768 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.440701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.441104 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.441147 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.441798 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.441972 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.441997 4768 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442009 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442020 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442726 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442743 4768 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442753 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442762 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442772 4768 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442801 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442811 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442820 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442831 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442841 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442851 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442878 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442888 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442899 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442908 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442917 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442927 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442952 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442964 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442975 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442984 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.442994 4768 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443003 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443013 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443087 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443096 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443107 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443117 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443128 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443158 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443169 4768 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443178 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443188 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443297 4768 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443318 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443329 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443340 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443351 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443372 4768 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443382 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443392 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443401 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443411 4768 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443420 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443428 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443437 4768 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443447 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443456 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443466 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443474 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443484 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443493 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443505 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443516 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443526 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443536 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443547 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443557 4768 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443566 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443576 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443585 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443595 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443605 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443614 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443624 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443634 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443643 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443655 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443665 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443674 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443684 4768 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443693 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443701 4768 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443710 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443721 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443730 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443740 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443749 4768 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443759 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443768 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443777 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443787 4768 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443814 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443825 4768 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443837 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443848 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443861 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443874 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443883 4768 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443892 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443901 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443910 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443918 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.443927 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.452555 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.452567 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.452738 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.452988 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453192 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453211 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453520 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453686 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453812 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453689 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.453925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.454111 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.454141 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.456512 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.456571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.456612 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.457089 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.457407 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.462996 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.464056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.465287 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.465329 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.465351 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.465429 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:36.965404599 +0000 UTC m=+72.355890439 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.466390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.466465 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.466529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.466978 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467104 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.467296 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.467319 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.467336 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467334 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467352 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.467437 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:36.967409214 +0000 UTC m=+72.357895054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467493 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.467748 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468032 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468148 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468189 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468238 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468291 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468285 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468501 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.468518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.469093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.469885 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.469890 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.469968 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470007 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470079 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470510 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470911 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.470995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.471270 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.471863 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.471865 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.472325 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.472449 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.472462 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.473409 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474104 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474123 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474124 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474420 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474546 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474507 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474655 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.474692 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.475983 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.481712 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.482609 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.492865 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.502363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.502420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.502442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.502474 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.502497 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.505364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.506219 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.508917 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545189 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545263 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545322 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545336 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545345 4768 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545359 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545371 4768 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545384 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545396 4768 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545407 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545418 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545428 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545438 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545448 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545459 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545469 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545480 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545491 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545502 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545511 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545522 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545532 4768 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545541 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545549 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" 
DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545551 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545559 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545640 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545661 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545681 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545700 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545371 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545718 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545757 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545768 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545780 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545790 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545800 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545809 4768 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545818 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545827 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545836 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545846 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545855 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545865 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545874 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc 
kubenswrapper[4768]: I0223 18:34:36.545883 4768 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545892 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545901 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545910 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545920 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545929 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545939 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545948 4768 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545959 4768 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545970 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545979 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545988 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.545997 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546006 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546015 4768 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546024 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546032 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546041 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546050 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546058 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546068 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546076 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546085 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546093 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546101 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546109 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546117 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546126 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546135 4768 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath 
\"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546144 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546152 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546160 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546169 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546178 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546186 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546194 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546203 4768 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546211 4768 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546220 4768 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546228 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546236 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546257 4768 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546266 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546273 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc 
kubenswrapper[4768]: I0223 18:34:36.546281 4768 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546289 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546297 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546305 4768 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546313 4768 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546321 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546329 4768 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546337 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546345 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546353 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.546361 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.574539 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.580534 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.586227 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.591396 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},Sta
rtupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.593548 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.605015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.605059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.605072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.605094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.605132 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.608544 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f "/env/_master" ]]; then Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: source "/env/_master" Feb 23 18:34:36 crc kubenswrapper[4768]: set +o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 23 18:34:36 crc kubenswrapper[4768]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 18:34:36 crc kubenswrapper[4768]: ho_enable="--enable-hybrid-overlay" Feb 23 18:34:36 crc kubenswrapper[4768]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 18:34:36 crc kubenswrapper[4768]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 18:34:36 crc kubenswrapper[4768]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-host=127.0.0.1 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-port=9743 \ Feb 23 18:34:36 crc kubenswrapper[4768]: ${ho_enable} \ Feb 23 18:34:36 crc kubenswrapper[4768]: --enable-interconnect \ Feb 23 18:34:36 crc kubenswrapper[4768]: --disable-approver \ Feb 23 18:34:36 crc kubenswrapper[4768]: 
--extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --wait-for-kubernetes-api=200s \ Feb 23 18:34:36 crc kubenswrapper[4768]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --loglevel="${LOGLEVEL}" Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false
,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.611754 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f "/env/_master" ]]; then Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: source "/env/_master" Feb 23 18:34:36 crc kubenswrapper[4768]: set +o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: Feb 23 18:34:36 crc kubenswrapper[4768]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --disable-webhook \ Feb 23 18:34:36 crc kubenswrapper[4768]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --loglevel="${LOGLEVEL}" Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.612555 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 18:34:36 crc 
kubenswrapper[4768]: source /etc/kubernetes/apiserver-url.env Feb 23 18:34:36 crc kubenswrapper[4768]: else Feb 23 18:34:36 crc kubenswrapper[4768]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 18:34:36 crc kubenswrapper[4768]: exit 1 Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.612840 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.614065 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.671237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b45f9b83af80fc1f85114bcea17b16197eb0253bf1f40cc67131dc1d3ed48e50"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.672290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8c3ca3a79ce93b88528f96d68c3ba99fdd72ff5933a7d5188308045ff783305c"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.673668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c3af95446d623ae9c46d08c93046c2118ebfbbb9ba75c9140ab727b78277d74a"} Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.674229 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f "/env/_master" ]]; then Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: source "/env/_master" Feb 23 18:34:36 crc kubenswrapper[4768]: set +o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 23 18:34:36 crc kubenswrapper[4768]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 23 18:34:36 crc kubenswrapper[4768]: ho_enable="--enable-hybrid-overlay" Feb 23 18:34:36 crc kubenswrapper[4768]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 23 18:34:36 crc kubenswrapper[4768]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 23 18:34:36 crc kubenswrapper[4768]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-host=127.0.0.1 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --webhook-port=9743 \ Feb 23 18:34:36 crc kubenswrapper[4768]: ${ho_enable} \ Feb 23 18:34:36 crc kubenswrapper[4768]: --enable-interconnect \ Feb 23 18:34:36 crc kubenswrapper[4768]: --disable-approver \ Feb 23 18:34:36 crc kubenswrapper[4768]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --wait-for-kubernetes-api=200s \ Feb 23 18:34:36 crc kubenswrapper[4768]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --loglevel="${LOGLEVEL}" Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.674842 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.674944 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 23 18:34:36 crc kubenswrapper[4768]: source /etc/kubernetes/apiserver-url.env Feb 23 18:34:36 crc 
kubenswrapper[4768]: else Feb 23 18:34:36 crc kubenswrapper[4768]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 23 18:34:36 crc kubenswrapper[4768]: exit 1 Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,V
alue:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f
5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.675986 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, 
cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.676027 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.676643 4768 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 23 18:34:36 crc kubenswrapper[4768]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 23 18:34:36 crc kubenswrapper[4768]: if [[ -f "/env/_master" ]]; then Feb 23 18:34:36 crc kubenswrapper[4768]: set -o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: source "/env/_master" Feb 23 18:34:36 crc kubenswrapper[4768]: set +o allexport Feb 23 18:34:36 crc kubenswrapper[4768]: fi Feb 23 18:34:36 crc kubenswrapper[4768]: Feb 23 18:34:36 crc kubenswrapper[4768]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 23 18:34:36 crc kubenswrapper[4768]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 23 18:34:36 crc kubenswrapper[4768]: --disable-webhook \ Feb 23 18:34:36 crc kubenswrapper[4768]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 23 18:34:36 crc kubenswrapper[4768]: --loglevel="${LOGLEVEL}" Feb 23 18:34:36 crc kubenswrapper[4768]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 23 18:34:36 crc kubenswrapper[4768]: > logger="UnhandledError" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.677808 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.684628 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.687297 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.687318 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.687671 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.695121 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.703098 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.707737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.707768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.707777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.707795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.707805 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.713774 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.723588 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.736044 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.747638 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.758802 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.771202 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.781496 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.794581 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.803370 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.809472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.809502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.809512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.809527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.809538 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.815137 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.911934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.912015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.912037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.912085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.912109 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:36Z","lastTransitionTime":"2026-02-23T18:34:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.950018 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.950117 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.950202 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:34:37.950175783 +0000 UTC m=+73.340661593 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:34:36 crc kubenswrapper[4768]: I0223 18:34:36.950279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.950332 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.950389 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.950404 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:37.950384339 +0000 UTC m=+73.340870179 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:36 crc kubenswrapper[4768]: E0223 18:34:36.951068 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:37.951054327 +0000 UTC m=+73.341540127 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.014575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.014872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.014931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.014999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.015066 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.051384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.051577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051675 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051719 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051737 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051740 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051772 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051784 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051796 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:38.051776968 +0000 UTC m=+73.442262778 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.051846 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:38.051824549 +0000 UTC m=+73.442310409 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.072096 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.086508 4768 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.116046 4768 csr.go:261] certificate signing request csr-jggx5 is approved, waiting to be issued Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.117159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.117192 4768 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.117201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.117214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.117223 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.125032 4768 csr.go:257] certificate signing request csr-jggx5 is issued Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.219677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.219974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.220107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.220233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.220392 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.298027 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:03:34.957335254 +0000 UTC Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.307530 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.307658 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.313234 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.314115 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.315417 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.316852 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.317675 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.319035 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.319837 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.321120 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322289 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322343 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.322612 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.324619 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.325121 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.325878 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.326520 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.327101 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.327830 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.328429 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.329085 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.329875 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.330421 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.331396 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.331847 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.332383 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.333169 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.333832 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.334654 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.335225 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.336161 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.336623 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.337268 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.338276 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.338718 4768 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.338814 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.340726 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.341213 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.341663 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.343547 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.344179 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.345024 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.345663 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.346820 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.347307 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.347862 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.348828 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.349734 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.350192 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.351046 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.351535 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.352580 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.353038 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.354039 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.354602 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.355174 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.356124 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.356620 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.424964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.425005 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.425016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.425031 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.425040 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.527381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.527420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.527432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.527446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.527455 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.630436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.630473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.630481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.630495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.630508 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.676450 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6"
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.676675 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.733149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.733186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.733194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.733209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.733220 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.835287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.835346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.835363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.835389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.835404 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.938544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.938608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.938625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.938655 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.938676 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:37Z","lastTransitionTime":"2026-02-23T18:34:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.959320 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.959463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:37 crc kubenswrapper[4768]: I0223 18:34:37.959503 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.959544 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:34:39.959506369 +0000 UTC m=+75.349992209 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.959608 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.959689 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:39.959669683 +0000 UTC m=+75.350155473 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.959697 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 18:34:37 crc kubenswrapper[4768]: E0223 18:34:37.959789 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:39.959766776 +0000 UTC m=+75.350252576 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.042001 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.042052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.042062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.042079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.042091 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.061067 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.061129 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061299 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061324 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061339 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061340 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061368 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061388 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061400 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:40.061380131 +0000 UTC m=+75.451865941 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.061455 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:40.061434323 +0000 UTC m=+75.451920163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.126802 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-23 18:29:37 +0000 UTC, rotation deadline is 2027-01-03 01:28:51.213650325 +0000 UTC
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.126886 4768 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7518h54m13.086770435s for next certificate rotation
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.145198 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.145283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.145298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.145322 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.145336 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.248833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.248908 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.248926 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.248956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.248978 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.299661 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:09:01.380997444 +0000 UTC
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.307093 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.307229 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.307429 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 18:34:38 crc kubenswrapper[4768]: E0223 18:34:38.307836 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.352405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.352472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.352494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.352516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.352534 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.455763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.455829 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.455848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.455875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.455894 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.558950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.559028 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.559056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.559091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.559114 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.662059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.662105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.662125 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.662152 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.662167 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.765845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.765911 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.765929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.765955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.765972 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.868304 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.868371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.868391 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.868422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.868442 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.970983 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.971046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.971068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.971099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:38 crc kubenswrapper[4768]: I0223 18:34:38.971122 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:38Z","lastTransitionTime":"2026-02-23T18:34:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.074194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.074243 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.074291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.074314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.074330 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.177725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.177791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.177807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.177831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.177850 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.218536 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.281184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.281314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.281342 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.281371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.281392 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.300712 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:35:35.610297948 +0000 UTC Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.307192 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.307486 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.385421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.385465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.385479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.385498 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.385512 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.487963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.488030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.488053 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.488084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.488104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.595017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.595098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.595120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.595151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.595175 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.697733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.697782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.697798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.697820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.697835 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.801361 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.801421 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.801457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.801486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.801508 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.904003 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.904055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.904071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.904093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.904109 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:39Z","lastTransitionTime":"2026-02-23T18:34:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.978860 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.978967 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:39 crc kubenswrapper[4768]: I0223 18:34:39.979012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.979147 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.979219 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:43.97919796 +0000 UTC m=+79.369683790 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.979370 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:34:43.979350214 +0000 UTC m=+79.369836054 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.979472 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:39 crc kubenswrapper[4768]: E0223 18:34:39.979514 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:43.979502408 +0000 UTC m=+79.369988238 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.007595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.007646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.007662 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.007684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.007703 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.031648 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.047973 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.051488 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.068153 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.083527 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.083638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.083982 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084015 4768 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084035 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084100 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:44.084079475 +0000 UTC m=+79.474565305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084610 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084648 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084664 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.084714 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:44.084698543 +0000 UTC m=+79.475184373 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.087730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.106745 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.111868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.111922 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.111941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.111966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.111984 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.126486 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.144637 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.162229 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.216124 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.216197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.216214 4768 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.216240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.216294 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.301358 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 10:20:11.786773723 +0000 UTC Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.306994 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.307239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.309448 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:40 crc kubenswrapper[4768]: E0223 18:34:40.309596 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.318682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.318756 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.318774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.318803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.318822 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.421021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.421103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.421120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.421144 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.421162 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.524170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.524274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.524298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.524325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.524342 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.626709 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.626769 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.626787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.626809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.626826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.730589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.730693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.730710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.730736 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.730753 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.835054 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.835183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.835211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.835244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.835309 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.938586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.938668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.938690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.938723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:40 crc kubenswrapper[4768]: I0223 18:34:40.938746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:40Z","lastTransitionTime":"2026-02-23T18:34:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.042138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.042208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.042229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.042294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.042316 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.144622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.144678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.144696 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.144721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.144738 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.248689 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.248757 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.248770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.248798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.248816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.302220 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:38:16.951504283 +0000 UTC Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.307673 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:41 crc kubenswrapper[4768]: E0223 18:34:41.307875 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.351901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.351963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.351975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.352007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.352023 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.455711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.455791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.455831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.455862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.455885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.559009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.559085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.559111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.559142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.559166 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.662355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.662431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.662456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.662487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.662508 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.765738 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.765797 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.765820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.765847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.765863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.868608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.868654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.868670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.868693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.868710 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.972292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.972352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.972372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.972397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:41 crc kubenswrapper[4768]: I0223 18:34:41.972415 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:41Z","lastTransitionTime":"2026-02-23T18:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.075694 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.075783 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.075807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.075841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.075866 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.178971 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.179030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.179047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.179084 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.179101 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.282627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.282691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.282708 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.282731 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.282750 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.303419 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:37:51.864092074 +0000 UTC Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.306935 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:42 crc kubenswrapper[4768]: E0223 18:34:42.307155 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.307203 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:42 crc kubenswrapper[4768]: E0223 18:34:42.307717 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.316607 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.385920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.385992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.386015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.386049 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.386072 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.489687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.489737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.489754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.489777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.489793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.595115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.595187 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.595209 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.595236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.595292 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.698025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.698128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.698148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.698202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.698220 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.801501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.801570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.801590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.801613 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.801633 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.904147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.904191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.904201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.904215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:42 crc kubenswrapper[4768]: I0223 18:34:42.904226 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:42Z","lastTransitionTime":"2026-02-23T18:34:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.006768 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.006854 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.006872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.006895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.006950 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.110325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.110484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.110503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.110526 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.110543 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.214096 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.214146 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.214163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.214188 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.214206 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.303552 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 08:41:39.314270846 +0000 UTC Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.307029 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:43 crc kubenswrapper[4768]: E0223 18:34:43.307198 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.317972 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.318066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.318083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.318128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.318145 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.420915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.421383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.421630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.421792 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.421969 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.525300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.525379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.525399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.525428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.525448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.629185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.629297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.629335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.629367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.629391 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.734203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.734324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.734352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.734379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.734398 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.838214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.838324 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.838352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.838384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.838405 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.941193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.941287 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.941307 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.941333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:43 crc kubenswrapper[4768]: I0223 18:34:43.941352 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:43Z","lastTransitionTime":"2026-02-23T18:34:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.022638 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.022770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.022873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.022963 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:34:52.022909282 +0000 UTC m=+87.413395142 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.022995 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.023024 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.023126 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:52.023098887 +0000 UTC m=+87.413584717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.023165 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:52.023152798 +0000 UTC m=+87.413638628 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.044496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.044556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.044573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.044597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.044614 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.124059 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.124148 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124329 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124382 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124403 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124436 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124473 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124484 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:52.124458166 +0000 UTC m=+87.514944006 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124497 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.124582 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:34:52.124556009 +0000 UTC m=+87.515041849 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.148089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.148424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.148515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.148543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.148559 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.252109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.252194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.252219 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.252294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.252322 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.304090 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:09:39.394740107 +0000 UTC
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.307705 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.307810 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.307921 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 23 18:34:44 crc kubenswrapper[4768]: E0223 18:34:44.308019 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.355919 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.355968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.355987 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.356010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.356026 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.459385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.459652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.459717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.459803 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.459860 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.562923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.562969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.562988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.563011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.563028 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.665882 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.666238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.666446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.666587 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.666715 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.772661 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.772759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.772802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.772839 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.772862 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.875889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.876339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.876513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.876657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.876792 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.979726 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.979774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.979791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.979817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.979836 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.998333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.998399 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.998417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.998443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 23 18:34:44 crc kubenswrapper[4768]: I0223 18:34:44.998464 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:44Z","lastTransitionTime":"2026-02-23T18:34:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.014201 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.019611 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.019679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.019697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.019725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.019746 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.035155 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.039778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.039841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.039860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.039885 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.039905 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.055487 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.060481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.060562 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.060586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.060617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.060639 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.077525 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.082412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.082553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.082654 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.082753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.082865 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.085695 4768 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.086997 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-m
arketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc
0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176c
a14f7e0c\\\"}}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": read tcp 38.102.83.115:54554->38.102.83.115:6443: use of closed network connection" Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.087307 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.094711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.094754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.094767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.094996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.095013 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.198750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.198809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.198826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.198850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.198870 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.302384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.302453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.302476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.302505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.302547 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.304995 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:04:23.015350674 +0000 UTC Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.307502 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:45 crc kubenswrapper[4768]: E0223 18:34:45.307700 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.330015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.345275 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.360535 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.377871 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.396031 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.405730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.405775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.405787 4768 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.405804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.405815 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.422239 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.443623 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.456311 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.470497 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.508710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc 
kubenswrapper[4768]: I0223 18:34:45.508774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.508791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.508820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.508840 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.612244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.612350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.612368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.612397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.612416 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.669053 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.715015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.715447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.715636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.715858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.715998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.819023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.819098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.819133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.819153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.819167 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.922223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.922327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.922345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.922370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:45 crc kubenswrapper[4768]: I0223 18:34:45.922388 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:45Z","lastTransitionTime":"2026-02-23T18:34:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.025462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.025515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.025531 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.025556 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.025572 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.128394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.128462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.128484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.128518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.128553 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.231223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.231310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.231328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.231350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.231367 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.305550 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 04:33:08.281484635 +0000 UTC Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.306891 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.306898 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:46 crc kubenswrapper[4768]: E0223 18:34:46.307062 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:46 crc kubenswrapper[4768]: E0223 18:34:46.307173 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.334475 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.334533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.334551 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.334573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.334591 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.437720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.437781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.437802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.437828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.438037 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.541453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.541536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.541560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.541586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.541606 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.644801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.644850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.644862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.644880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.644893 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.748224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.748314 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.748340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.748363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.748381 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.851528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.851622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.851644 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.851678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.851698 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.954244 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.954298 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.954306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.954320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:46 crc kubenswrapper[4768]: I0223 18:34:46.954330 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:46Z","lastTransitionTime":"2026-02-23T18:34:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.059886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.059951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.059965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.059982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.059995 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.163711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.163843 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.163869 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.164433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.164480 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.267454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.267528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.267546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.267573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.267589 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.306351 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:08:35.30769169 +0000 UTC Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.306601 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:47 crc kubenswrapper[4768]: E0223 18:34:47.306777 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.370305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.370364 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.370384 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.370411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.370433 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.473776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.473861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.473878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.473930 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.473948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.577062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.577148 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.577159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.577183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.577197 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.679396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.679445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.679459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.679479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.679494 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.782433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.782507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.782525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.782552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.782574 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.886330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.886458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.886477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.886504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.886530 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.990373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.990433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.990445 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.990466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:47 crc kubenswrapper[4768]: I0223 18:34:47.990478 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:47Z","lastTransitionTime":"2026-02-23T18:34:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.093396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.093508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.093534 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.093565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.093588 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.197267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.197362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.197388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.197436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.197454 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.300643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.300713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.300724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.300747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.300760 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.306888 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 05:20:57.027871496 +0000 UTC Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.307103 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.307107 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:48 crc kubenswrapper[4768]: E0223 18:34:48.307390 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:48 crc kubenswrapper[4768]: E0223 18:34:48.307501 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.404502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.404592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.404615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.404649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.404672 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.507717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.507776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.507796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.507823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.507846 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.610605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.610740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.610766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.610800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.610826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.714286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.714369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.714388 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.714419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.714435 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.817814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.817912 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.817932 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.817993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.818013 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.921679 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.921746 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.921765 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.921791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:48 crc kubenswrapper[4768]: I0223 18:34:48.921809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:48Z","lastTransitionTime":"2026-02-23T18:34:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.024849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.024917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.024934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.024958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.024976 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.128145 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.128214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.128232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.128292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.128310 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.231107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.231182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.231204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.231227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.231250 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.307327 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.307356 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:12:35.216213465 +0000 UTC Feb 23 18:34:49 crc kubenswrapper[4768]: E0223 18:34:49.307755 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.334150 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.334483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.334497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.334515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.334530 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.437482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.437538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.437552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.437574 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.437586 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.540808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.540862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.540881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.540904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.540920 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.644071 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.644821 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.644852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.644881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.644900 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.717687 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.734961 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.749366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.749431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.749448 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.749477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.749495 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.755595 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.770678 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.789686 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"ter
minated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.805360 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.821609 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.837717 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.850883 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.852733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.852791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.852810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.852837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.852857 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.866355 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.956572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.956635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.956653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.956686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:49 crc kubenswrapper[4768]: I0223 18:34:49.956702 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:49Z","lastTransitionTime":"2026-02-23T18:34:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.059850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.060223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.060238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.060280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.060292 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.164429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.164511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.164527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.164555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.164573 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.267993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.268446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.268617 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.268779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.268953 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.307408 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.307461 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.307533 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 02:25:59.302366568 +0000 UTC Feb 23 18:34:50 crc kubenswrapper[4768]: E0223 18:34:50.307603 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:50 crc kubenswrapper[4768]: E0223 18:34:50.307807 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.309009 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:34:50 crc kubenswrapper[4768]: E0223 18:34:50.309341 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.371340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.371408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.371432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.371460 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.371483 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.473649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.473697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.473707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.473753 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.473769 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.576051 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.576113 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.576130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.576158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.576176 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.679903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.679979 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.679998 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.680024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.680041 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.723947 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.724021 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.742890 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c
07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.770992 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.783093 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.783139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.783151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.783171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.783189 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.785727 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.802230 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.815835 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.828634 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.842136 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.857341 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.871748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.885435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.885481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.885494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.885513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.885527 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.987424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.987476 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.987490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.987508 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:50 crc kubenswrapper[4768]: I0223 18:34:50.987520 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:50Z","lastTransitionTime":"2026-02-23T18:34:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.090104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.090147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.090157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.090173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.090184 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.192357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.192420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.192437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.192463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.192480 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.298069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.298155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.298183 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.298217 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.298291 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.307228 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.307797 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 21:10:26.442903043 +0000 UTC Feb 23 18:34:51 crc kubenswrapper[4768]: E0223 18:34:51.307835 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.332056 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.400886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.400945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.400965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.401027 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.401044 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.504288 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.504673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.504760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.504867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.505005 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.608173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.608640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.608788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.608840 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.608862 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.711204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.711305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.711331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.711362 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.711385 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.814389 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.814472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.814490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.814517 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.814536 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.917880 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.918397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.918607 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.918860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:51 crc kubenswrapper[4768]: I0223 18:34:51.919033 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:51Z","lastTransitionTime":"2026-02-23T18:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.022752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.022817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.022835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.022862 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.022882 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.099937 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.100034 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.100063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.100173 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.100234 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:08.100208645 +0000 UTC m=+103.490694475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.100501 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:35:08.100443922 +0000 UTC m=+103.490929772 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.100694 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.100830 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:08.100811892 +0000 UTC m=+103.491297812 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.125632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.125682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.125695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.125713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.125726 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.200992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.201032 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201126 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201141 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201151 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201199 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:08.201186893 +0000 UTC m=+103.591672693 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201269 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201321 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201344 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.201433 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:08.201406769 +0000 UTC m=+103.591892599 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.227693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.227735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.227745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.227763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.227775 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.307589 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.307777 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.307618 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:52 crc kubenswrapper[4768]: E0223 18:34:52.308005 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.308036 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:06:35.066728912 +0000 UTC Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.330165 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.330330 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.330412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.330492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.330553 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.433383 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.433436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.433449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.433466 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.433480 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.536410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.536479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.536501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.536527 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.536545 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.639306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.639392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.639416 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.639443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.639461 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.742344 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.742691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.742864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.743035 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.743228 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.846742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.846808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.846831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.846860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.846880 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.950385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.950482 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.950501 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.950529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:52 crc kubenswrapper[4768]: I0223 18:34:52.950546 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:52Z","lastTransitionTime":"2026-02-23T18:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.054320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.054368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.054380 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.054402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.054416 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.157292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.157363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.157379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.157415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.157433 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.260422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.260493 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.260515 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.260545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.260567 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.307670 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:53 crc kubenswrapper[4768]: E0223 18:34:53.307869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.308435 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:15:40.076133444 +0000 UTC Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.363707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.363754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.363770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.363795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.363813 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.467038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.467102 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.467123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.467147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.467165 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.570422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.570530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.570557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.570592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.570618 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.674400 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.674787 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.674850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.674977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.675076 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.778806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.779479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.779904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.780381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.780783 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.885363 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.885446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.885467 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.885499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.885519 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.989006 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.989069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.989079 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.989094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:53 crc kubenswrapper[4768]: I0223 18:34:53.989103 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:53Z","lastTransitionTime":"2026-02-23T18:34:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.092063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.092120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.092137 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.092163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.092180 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.195135 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.195316 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.195348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.195430 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.195457 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.298904 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.298980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.299000 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.299030 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.299053 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.307214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.307314 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:54 crc kubenswrapper[4768]: E0223 18:34:54.307429 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:54 crc kubenswrapper[4768]: E0223 18:34:54.307599 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.309311 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 07:06:53.842662735 +0000 UTC Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.402338 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.402428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.402478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.402505 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.402523 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.505953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.506016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.506033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.506057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.506076 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.609595 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.609702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.609724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.609751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.609771 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.713538 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.713602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.713614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.713636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.713651 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.817405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.817470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.817487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.817518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.817544 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.921111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.921177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.921194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.921221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:54 crc kubenswrapper[4768]: I0223 18:34:54.921238 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:54Z","lastTransitionTime":"2026-02-23T18:34:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.024893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.024953 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.024967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.024992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.025007 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.128348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.128429 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.128450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.128483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.128505 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.231955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.232016 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.232033 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.232059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.232077 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.307217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.307466 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.309607 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 22:45:07.091260688 +0000 UTC Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.328456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.329438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 
crc kubenswrapper[4768]: I0223 18:34:55.329593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.329704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.329820 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.329938 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.350239 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.355842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.355928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.355949 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.355986 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.356007 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.367762 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.381758 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.387496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.387575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.387594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.387622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.387650 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.399378 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.418547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.420138 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.425058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.425120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.425138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.425163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.425213 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.441590 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.448687 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.454852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.454902 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.454917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.454937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.454948 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.463845 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.477892 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: E0223 18:34:55.478218 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.480492 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.480585 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.480627 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.480723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.480745 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.486440 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.508332 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.532208 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.553663 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.583194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.583232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.583256 4768 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.583274 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.583285 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.686392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.686443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.686487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.686507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.686533 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.742865 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.769677 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.789763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.789841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.789859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.789886 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.789906 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.792170 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.813150 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.835397 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.884504 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.891778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.891822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc 
kubenswrapper[4768]: I0223 18:34:55.891834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.891852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.891863 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.915157 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.932755 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.945302 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.975142 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setu
p\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.990920 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:34:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.993933 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.993992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.994009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.994034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:55 crc kubenswrapper[4768]: I0223 18:34:55.994050 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:55Z","lastTransitionTime":"2026-02-23T18:34:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.097231 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.097331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.097351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.097379 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.097406 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.200867 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.200940 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.200963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.200994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.201017 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.303697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.303763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.303781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.303807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.303825 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.306942 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.306964 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:56 crc kubenswrapper[4768]: E0223 18:34:56.307049 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:56 crc kubenswrapper[4768]: E0223 18:34:56.307152 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.310224 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:38:51.62672864 +0000 UTC Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.407958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.408046 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.408072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.408103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.408125 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.510485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.510543 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.510561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.510584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.510603 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.613712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.613780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.613798 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.613823 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.613842 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.717058 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.717560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.717800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.717992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.718188 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.822193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.823010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.823207 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.823443 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.823642 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.926189 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.926232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.926263 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.926282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:56 crc kubenswrapper[4768]: I0223 18:34:56.926294 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:56Z","lastTransitionTime":"2026-02-23T18:34:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.029582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.029693 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.029721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.029750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.029770 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.132567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.132635 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.132657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.132686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.132710 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.235665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.235718 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.235734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.235760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.235776 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.307026 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:57 crc kubenswrapper[4768]: E0223 18:34:57.307351 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.310421 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:39:27.271838024 +0000 UTC Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.338648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.338732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.338750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.338778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.338797 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.441990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.442069 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.442094 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.442131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.442155 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.545004 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.545089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.545108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.545126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.545141 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.647327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.647396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.647418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.647450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.647475 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.749670 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.749734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.749749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.749772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.749788 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.852777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.852853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.852879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.852910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.852931 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.955428 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.955486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.955504 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.955528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:57 crc kubenswrapper[4768]: I0223 18:34:57.955546 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:57Z","lastTransitionTime":"2026-02-23T18:34:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.058442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.058503 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.058525 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.058554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.058639 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.161337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.161422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.161439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.161461 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.161481 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.265062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.265138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.265161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.265190 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.265209 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.307000 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.307090 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:34:58 crc kubenswrapper[4768]: E0223 18:34:58.307187 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:34:58 crc kubenswrapper[4768]: E0223 18:34:58.307307 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.311316 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:09:30.202336581 +0000 UTC Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.368580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.368656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.368683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.368713 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.368736 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.471909 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.472021 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.472050 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.472081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.472103 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.575291 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.575360 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.575377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.575405 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.575423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.677426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.677481 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.677497 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.677575 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.677593 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.780636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.780682 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.780699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.780723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.780740 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.884725 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.884788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.884804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.884830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.884849 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.987967 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.988048 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.988066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.988090 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:58 crc kubenswrapper[4768]: I0223 18:34:58.988108 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:58Z","lastTransitionTime":"2026-02-23T18:34:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.091657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.091735 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.091758 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.091796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.091820 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.195438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.195518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.195536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.195568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.195586 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.298506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.298572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.298590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.298614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.298632 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.307078 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:34:59 crc kubenswrapper[4768]: E0223 18:34:59.307354 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.311790 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:56:29.299942299 +0000 UTC Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.401518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.401568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.401578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.401596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.401608 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.505155 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.505235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.505295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.505325 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.505348 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.609106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.609170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.609186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.609212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.609228 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.712083 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.712222 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.712293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.712339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.712378 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.815684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.816550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.816593 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.816626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.816645 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.919355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.919422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.919438 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.919454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:34:59 crc kubenswrapper[4768]: I0223 18:34:59.919466 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:34:59Z","lastTransitionTime":"2026-02-23T18:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.022719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.022778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.022791 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.022811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.022823 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.125286 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.125343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.125352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.125385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.125395 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.228123 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.228201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.228225 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.228284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.228305 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.306708 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.306754 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:00 crc kubenswrapper[4768]: E0223 18:35:00.306869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:00 crc kubenswrapper[4768]: E0223 18:35:00.307008 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.312296 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 20:21:15.586897547 +0000 UTC Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.330771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.330861 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.330884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.330917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.330942 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.434782 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.434849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.434872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.434906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.434932 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.538459 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.538545 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.538568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.538597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.538619 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.641873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.642306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.642449 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.642590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.642736 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.745528 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.745586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.745602 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.745626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.745645 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.848677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.848733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.848780 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.848804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.848822 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.951350 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.951402 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.951418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.951444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:00 crc kubenswrapper[4768]: I0223 18:35:00.951460 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:00Z","lastTransitionTime":"2026-02-23T18:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.054313 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.054370 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.054378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.054396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.054408 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.156036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.156066 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.156074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.156087 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.156098 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.258907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.258952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.258963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.258980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.258992 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.307197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:01 crc kubenswrapper[4768]: E0223 18:35:01.307430 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.312495 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 21:08:15.275890342 +0000 UTC Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.362793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.362852 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.362868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.362892 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.362906 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.465917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.465994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.466012 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.466038 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.466056 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.569777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.569838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.569849 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.569872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.569884 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.672974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.673022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.673036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.673055 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.673067 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.775877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.775929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.775948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.775974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.775993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.879478 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.879580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.879618 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.879650 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.879673 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.982954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.983017 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.983034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.983068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:01 crc kubenswrapper[4768]: I0223 18:35:01.983086 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:01Z","lastTransitionTime":"2026-02-23T18:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.086333 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.086386 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.086395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.086417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.086432 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.189636 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.189721 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.189742 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.189772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.189794 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.293866 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.293956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.293975 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.294007 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.294027 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.307435 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.307462 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:02 crc kubenswrapper[4768]: E0223 18:35:02.307646 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:02 crc kubenswrapper[4768]: E0223 18:35:02.307799 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.313691 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 00:30:06.181386836 +0000 UTC Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.397785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.397853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.397873 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.397898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.397921 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.501942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.502014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.502034 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.502064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.502104 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.604656 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.604707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.604723 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.604745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.604762 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.707567 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.707642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.707660 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.707685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.707703 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.811057 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.811130 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.811147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.811173 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.811196 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.914199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.914320 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.914351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.914376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:02 crc kubenswrapper[4768]: I0223 18:35:02.914396 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:02Z","lastTransitionTime":"2026-02-23T18:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.017290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.017394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.017427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.017463 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.017488 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.120045 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.120100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.120115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.120133 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.120148 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.223410 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.223473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.223495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.223523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.223544 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.307347 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:03 crc kubenswrapper[4768]: E0223 18:35:03.307507 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.313969 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:03:16.040909803 +0000 UTC Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.326149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.326211 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.326235 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.326299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.326323 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.428774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.428830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.428855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.428884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.428907 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.531119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.531234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.531327 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.531365 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.531389 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.635072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.635138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.635162 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.635193 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.635220 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.738134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.738206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.738230 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.738294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.738319 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.842019 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.842118 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.842142 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.842174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.842199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.944415 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.944486 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.944509 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.944540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:03 crc kubenswrapper[4768]: I0223 18:35:03.944563 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:03Z","lastTransitionTime":"2026-02-23T18:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.047855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.047924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.047963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.047994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.048020 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.151212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.151312 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.151331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.151357 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.151379 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.254691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.254778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.254808 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.254841 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.254864 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.307300 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.307383 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:04 crc kubenswrapper[4768]: E0223 18:35:04.307468 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:04 crc kubenswrapper[4768]: E0223 18:35:04.307557 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.314862 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:40:35.234333442 +0000 UTC Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.357859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.357939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.357963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.357994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.358022 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.461744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.461810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.461826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.461858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.461881 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.565119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.565174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.565195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.565220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.565236 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.668697 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.668770 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.668788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.668814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.668834 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.771615 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.771702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.771737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.771773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.771809 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.874777 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.874855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.874875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.874901 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.874919 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.977858 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.977914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.977931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.977954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:04 crc kubenswrapper[4768]: I0223 18:35:04.977973 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:04Z","lastTransitionTime":"2026-02-23T18:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.080597 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.080651 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.080667 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.080690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.080707 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.183814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.183890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.183900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.183939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.183953 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.287106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.287178 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.287194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.287223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.287241 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.306988 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.307489 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.307790 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.308103 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.315097 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:40:40.562453403 +0000 UTC Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.331861 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.351547 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.372958 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.390371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.390425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.390439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.390465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.390480 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.394992 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.422814 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.444752 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.466438 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.486834 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.493423 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.493477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.493499 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.493529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.493550 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.525066 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a40
83d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.545603 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.596455 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.596516 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.596533 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.596557 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.596574 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.699992 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.700523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.700701 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.700914 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.701099 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.804104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.804153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.804169 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.804191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.804208 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.848750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.848802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.848818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.848837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.848853 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.870582 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.875665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.875717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.875734 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.875754 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.875772 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.894702 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.899684 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.899771 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.899799 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.899833 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.899856 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.923178 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.928539 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.928594 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.928610 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.928633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.928651 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.951975 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.958111 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.958182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.958208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.958239 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.958300 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.977545 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:05Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:05 crc kubenswrapper[4768]: E0223 18:35:05.977787 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.979828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.979943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.979963 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.979993 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:05 crc kubenswrapper[4768]: I0223 18:35:05.980013 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:05Z","lastTransitionTime":"2026-02-23T18:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.083341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.083395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.083412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.083435 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.083454 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.185744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.185809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.185831 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.185859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.185879 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.289201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.289299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.289323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.289351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.289372 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.306609 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:06 crc kubenswrapper[4768]: E0223 18:35:06.306764 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.306616 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:06 crc kubenswrapper[4768]: E0223 18:35:06.306958 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.316304 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 12:37:10.954663905 +0000 UTC Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.392744 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.392814 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.392836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.392870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.392904 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.495544 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.495603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.495677 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.495711 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.495733 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.598828 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.598887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.598905 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.598929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.598945 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.701889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.701948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.701965 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.701990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.702006 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.805373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.805436 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.805458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.805487 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.805509 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.908710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.908762 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.908784 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.908809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:06 crc kubenswrapper[4768]: I0223 18:35:06.908826 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:06Z","lastTransitionTime":"2026-02-23T18:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.011739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.011794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.011811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.011835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.011853 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.113760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.113826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.113837 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.113863 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.113946 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.216688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.216760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.216778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.216807 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.216827 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.307520 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:07 crc kubenswrapper[4768]: E0223 18:35:07.307754 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.317517 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 10:58:45.578749218 +0000 UTC Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.318951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.318982 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.318990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.319002 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.319013 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.421822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.421879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.421895 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.421923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.421945 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.523999 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.524099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.524115 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.524138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.524155 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.627185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.627300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.627323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.627349 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.627365 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.730631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.730732 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.730751 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.730774 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.730793 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.834283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.834413 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.834433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.834457 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.834474 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.938228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.938305 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.938323 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.938345 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:07 crc kubenswrapper[4768]: I0223 18:35:07.938361 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:07Z","lastTransitionTime":"2026-02-23T18:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.042229 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.042328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.042346 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.042371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.042393 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.140745 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.140901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.140966 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.141092 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:35:40.141053892 +0000 UTC m=+135.531539722 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.141096 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.141121 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.141344 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:40.141310719 +0000 UTC m=+135.531796559 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.141382 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-23 18:35:40.141359771 +0000 UTC m=+135.531845601 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.144804 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.144865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.144890 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.144921 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.144944 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.242475 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.242542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242678 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242678 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242707 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242726 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242734 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr 
for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242749 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242816 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:40.242793151 +0000 UTC m=+135.633278981 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.242847 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:40.242829712 +0000 UTC m=+135.633315552 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.247692 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.247759 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.247788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.247818 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.247843 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.307010 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.307051 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.307166 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:08 crc kubenswrapper[4768]: E0223 18:35:08.307340 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.318429 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:29:49.226599764 +0000 UTC Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.351041 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.351110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.351127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.351157 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc 
kubenswrapper[4768]: I0223 18:35:08.351174 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.454513 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.454573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.454591 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.454614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.454631 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.557471 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.557570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.557596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.557625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.557646 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.661122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.661199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.661234 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.661308 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.661333 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.763859 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.763938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.763961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.763990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.764012 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.795113 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2d9sk"] Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.795759 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.798622 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.799148 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.799667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.819669 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.836070 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.856638 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.867227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.867318 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.867339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.867366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.867386 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.875835 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.897030 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.915884 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.932911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"host
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.947004 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.950402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqktc\" (UniqueName: \"kubernetes.io/projected/75a70ce4-e083-4488-9538-100e05969dfd-kube-api-access-jqktc\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.950562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/75a70ce4-e083-4488-9538-100e05969dfd-hosts-file\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.960783 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"}
,{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.969658 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.969695 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.969707 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.969722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.969734 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:08Z","lastTransitionTime":"2026-02-23T18:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:08 crc kubenswrapper[4768]: I0223 18:35:08.993015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:08Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.008407 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.051778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/75a70ce4-e083-4488-9538-100e05969dfd-hosts-file\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.051825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqktc\" (UniqueName: \"kubernetes.io/projected/75a70ce4-e083-4488-9538-100e05969dfd-kube-api-access-jqktc\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.052086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/75a70ce4-e083-4488-9538-100e05969dfd-hosts-file\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " 
pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.072896 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.072952 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.072964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.072985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.073004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.084632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqktc\" (UniqueName: \"kubernetes.io/projected/75a70ce4-e083-4488-9538-100e05969dfd-kube-api-access-jqktc\") pod \"node-resolver-2d9sk\" (UID: \"75a70ce4-e083-4488-9538-100e05969dfd\") " pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.117700 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2d9sk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.171744 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-bvntk"] Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.173061 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.174484 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-zckb9"] Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.175120 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rcq8b"] Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.175587 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.176901 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.178927 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179295 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179542 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179734 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179892 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179926 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.180044 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.180183 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.179546 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.180480 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 
18:35:09.180624 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.180995 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.181917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.182106 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.182129 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.182156 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.182180 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.200136 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.218186 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.240722 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-binary-copy\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2r29\" (UniqueName: \"kubernetes.io/projected/1947d9c5-33dd-4b10-8e84-e40f16a47a63-kube-api-access-x2r29\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cni-binary-copy\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253841 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-os-release\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253865 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-os-release\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cnibin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-multus-certs\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-socket-dir-parent\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253930 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-multus\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.253944 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254081 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-netns\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254115 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-hostroot\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254143 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-conf-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfc2m\" (UniqueName: \"kubernetes.io/projected/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-kube-api-access-lfc2m\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254207 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254275 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4q7t\" (UniqueName: \"kubernetes.io/projected/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-kube-api-access-t4q7t\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-system-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254352 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-bin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cnibin\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254412 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254443 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-k8s-cni-cncf-io\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-kubelet\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254506 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-etc-kubernetes\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254567 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-daemon-config\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254587 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-system-cni-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-rootfs\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.254620 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-proxy-tls\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 
18:35:09.259485 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.274757 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.284889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.284929 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.284941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.284958 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.284970 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.288366 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.300158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.307143 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:09 crc kubenswrapper[4768]: E0223 18:35:09.307335 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.308544 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is 
after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.319281 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 12:32:57.212576351 +0000 UTC Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.335789 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"na
me\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce54918
0d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.346843 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cnibin\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-k8s-cni-cncf-io\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355134 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-kubelet\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-etc-kubernetes\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc 
kubenswrapper[4768]: I0223 18:35:09.355151 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cnibin\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355218 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355223 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-k8s-cni-cncf-io\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-kubelet\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-etc-kubernetes\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.355881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-daemon-config\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356083 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-system-cni-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356448 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-system-cni-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-rootfs\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356635 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-rootfs\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356565 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-proxy-tls\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-binary-copy\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357416 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-binary-copy\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.356758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-daemon-config\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357420 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2r29\" (UniqueName: \"kubernetes.io/projected/1947d9c5-33dd-4b10-8e84-e40f16a47a63-kube-api-access-x2r29\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357503 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cni-binary-copy\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-os-release\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357635 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-os-release\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cnibin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357710 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-os-release\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-multus-certs\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " 
pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-multus-certs\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cnibin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-socket-dir-parent\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-multus\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-os-release\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357929 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357980 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-socket-dir-parent\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.357988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358012 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-netns\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358076 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-hostroot\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-conf-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfc2m\" (UniqueName: \"kubernetes.io/projected/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-kube-api-access-lfc2m\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358301 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4q7t\" (UniqueName: \"kubernetes.io/projected/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-kube-api-access-t4q7t\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358345 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-system-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358442 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358524 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-bin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358586 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-hostroot\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358642 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-cni-binary-copy\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-bin\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358020 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-var-lib-cni-multus\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358785 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-host-run-netns\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 
23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-multus-conf-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.358967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-system-cni-dir\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.359537 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-mcd-auth-proxy-config\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.359674 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1947d9c5-33dd-4b10-8e84-e40f16a47a63-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.360067 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1947d9c5-33dd-4b10-8e84-e40f16a47a63-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 
18:35:09.360050 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.363725 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-proxy-tls\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.376087 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.378478 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4q7t\" (UniqueName: \"kubernetes.io/projected/ed422723-0e38-45bc-a0d9-c4c51d3f2dc7-kube-api-access-t4q7t\") pod \"machine-config-daemon-zckb9\" (UID: \"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\") " pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.380016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2r29\" (UniqueName: \"kubernetes.io/projected/1947d9c5-33dd-4b10-8e84-e40f16a47a63-kube-api-access-x2r29\") pod \"multus-additional-cni-plugins-bvntk\" (UID: \"1947d9c5-33dd-4b10-8e84-e40f16a47a63\") " pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.380420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfc2m\" (UniqueName: \"kubernetes.io/projected/1c7d1a60-c63e-4279-9ce9-4eea677d4a70-kube-api-access-lfc2m\") pod \"multus-rcq8b\" (UID: \"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\") " pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.387994 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.388023 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.388032 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.388047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.388057 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.397094 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.407769 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.418564 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.431169 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.447897 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"e
nv-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.462126 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.477848 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.490101 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.490143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.490158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.490206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.490223 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.492330 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.506593 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.518504 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bvntk" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.523093 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: W0223 18:35:09.529906 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1947d9c5_33dd_4b10_8e84_e40f16a47a63.slice/crio-ff36384902f382043d04732ed107e0c2c0758069aca67b828de32c2009a600cb WatchSource:0}: Error finding container ff36384902f382043d04732ed107e0c2c0758069aca67b828de32c2009a600cb: Status 404 returned error can't find the container with id ff36384902f382043d04732ed107e0c2c0758069aca67b828de32c2009a600cb Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 
18:35:09.534668 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rcq8b" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.544041 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.544148 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: W0223 18:35:09.552952 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c7d1a60_c63e_4279_9ce9_4eea677d4a70.slice/crio-aebc20c90b3d4d06e1a6211141f12f4158dcaa3cf9baf710567cf1ba6d0e7909 WatchSource:0}: Error finding container aebc20c90b3d4d06e1a6211141f12f4158dcaa3cf9baf710567cf1ba6d0e7909: Status 404 returned error can't find the container with id aebc20c90b3d4d06e1a6211141f12f4158dcaa3cf9baf710567cf1ba6d0e7909 Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.560852 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: W0223 18:35:09.564218 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded422723_0e38_45bc_a0d9_c4c51d3f2dc7.slice/crio-ce3cd50078c05b941abce77c1cdba573ab9f1d8ca56f2baf75130f5bae8ffe84 WatchSource:0}: Error finding container ce3cd50078c05b941abce77c1cdba573ab9f1d8ca56f2baf75130f5bae8ffe84: Status 404 returned error can't find the container with id ce3cd50078c05b941abce77c1cdba573ab9f1d8ca56f2baf75130f5bae8ffe84 Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.574052 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbxnc"] Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.577582 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.580884 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581153 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581363 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581543 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581597 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581800 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.581806 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.582045 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.592899 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.592948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.592962 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.592989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.593003 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.600856 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.623082 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.638942 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.654160 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661155 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661200 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661337 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661444 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661535 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661570 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661605 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661638 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661703 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxnx4\" (UniqueName: \"kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661754 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661786 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661861 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc 
kubenswrapper[4768]: I0223 18:35:09.661896 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661930 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.661995 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.669030 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.687182 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop 
\\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.698210 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.698755 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.698767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.698788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.698801 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.702898 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.718750 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.735668 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.746718 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.760154 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762566 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762619 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762637 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762645 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762685 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762704 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762749 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762750 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762779 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762728 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762851 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762880 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxnx4\" (UniqueName: \"kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log\") pod 
\"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762935 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762911 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762973 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763019 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.762877 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763108 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763133 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763091 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.763189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.767693 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.767943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.770706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.770708 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.782369 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.785036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxnx4\" (UniqueName: \"kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4\") pod \"ovnkube-node-nbxnc\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.787332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerStarted","Data":"e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.787423 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerStarted","Data":"ff36384902f382043d04732ed107e0c2c0758069aca67b828de32c2009a600cb"} Feb 23 18:35:09 
crc kubenswrapper[4768]: I0223 18:35:09.788748 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2d9sk" event={"ID":"75a70ce4-e083-4488-9538-100e05969dfd","Type":"ContainerStarted","Data":"d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.788813 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2d9sk" event={"ID":"75a70ce4-e083-4488-9538-100e05969dfd","Type":"ContainerStarted","Data":"d2a993958eee6db422f4b8111296ff18c1868a0f907486003830028aa67ec753"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.791368 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.791402 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.791417 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"ce3cd50078c05b941abce77c1cdba573ab9f1d8ca56f2baf75130f5bae8ffe84"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.794697 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rcq8b" event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerStarted","Data":"f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.794741 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rcq8b" event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerStarted","Data":"aebc20c90b3d4d06e1a6211141f12f4158dcaa3cf9baf710567cf1ba6d0e7909"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.802077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.801976 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.802138 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.802161 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.802184 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.802200 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.818239 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.831166 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.852749 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.866811 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.878510 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.893232 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.904923 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.904969 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.904980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.904996 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.905010 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:09Z","lastTransitionTime":"2026-02-23T18:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:09 crc kubenswrapper[4768]: W0223 18:35:09.912427 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfa4db1d_97c7_44ee_be87_27167edeb9a9.slice/crio-b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307 WatchSource:0}: Error finding container b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307: Status 404 returned error can't find the container with id b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307 Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.924810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d
742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.957611 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\
\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\
":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"va
r-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/ru
n/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.971455 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.984311 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:09 crc kubenswrapper[4768]: I0223 18:35:09.995911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:09Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.007867 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.007924 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.007956 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.007968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:10 crc 
kubenswrapper[4768]: I0223 18:35:10.007989 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.008004 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:10Z","lastTransitionTime":"2026-02-23T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.028106 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.041507 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.054656 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.066279 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.079534 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.095945 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.109660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.110300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.110332 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.110340 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.110353 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.110362 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:10Z","lastTransitionTime":"2026-02-23T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.213465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.213529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.213549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.213576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.213598 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:10Z","lastTransitionTime":"2026-02-23T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.307195 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.307195 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:10 crc kubenswrapper[4768]: E0223 18:35:10.307401 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:10 crc kubenswrapper[4768]: E0223 18:35:10.307487 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.875216 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:02:43.528520742 +0000 UTC Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.878747 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.878796 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.878834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.878857 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.878874 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:10Z","lastTransitionTime":"2026-02-23T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.888974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.889325 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694" exitCode=0 Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.892423 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e" exitCode=0 Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.892506 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.892589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.904067 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.928313 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.944900 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.961734 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.976949 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.981441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.981470 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.981479 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:10 crc 
kubenswrapper[4768]: I0223 18:35:10.981494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.981504 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:10Z","lastTransitionTime":"2026-02-23T18:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:10 crc kubenswrapper[4768]: I0223 18:35:10.994786 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:10Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.016761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.042442 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d
48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.053921 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.067670 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.079811 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.084683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 
18:35:11.084720 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.084730 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.084745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.084755 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.096386 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.109976 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.123521 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.140642 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.153783 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.168768 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.182603 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.186599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.186640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.186653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.186671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.186682 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.197443 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.208367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.220097 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.237884 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.254499 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.268107 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.282791 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.289085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.289128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.289143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.289163 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.289177 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.304797 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.307348 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:11 crc kubenswrapper[4768]: E0223 18:35:11.307500 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.325210 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.340068 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.350885 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.361086 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.391554 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 
18:35:11.391584 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.391592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.391604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.391612 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.494590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.494630 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.494641 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.494657 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.494670 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.598158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.598201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.598212 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.598228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.598269 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.701100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.701166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.701204 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.701311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.701345 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.805300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.805668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.805681 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.805699 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.805711 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.875501 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:19:51.86659481 +0000 UTC Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.904165 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.904280 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.904298 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.904312 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.908017 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f" exitCode=0 Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.908095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" 
event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.909025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.909068 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.909085 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.909109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.909128 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:11Z","lastTransitionTime":"2026-02-23T18:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.927830 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.945201 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.958925 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.976753 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:11 crc kubenswrapper[4768]: I0223 18:35:11.999639 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:11Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.012879 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.013456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.013507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.013519 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.013540 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.013555 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.026652 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.041744 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.063466 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.077521 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.092344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.112987 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.119378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc 
kubenswrapper[4768]: I0223 18:35:12.119411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.119419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.119434 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.119445 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.131513 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.158282 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.172548 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.222568 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.222631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.222646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.222671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.222687 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.307617 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.307705 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:12 crc kubenswrapper[4768]: E0223 18:35:12.307803 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:12 crc kubenswrapper[4768]: E0223 18:35:12.307951 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.325477 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.325523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.325536 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.325553 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.325563 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.428750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.428826 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.428848 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.428879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.428901 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.532026 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.532098 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.532136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.532171 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.532195 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.639941 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.640109 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.640134 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.640195 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.640514 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.744511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.744605 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.744643 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.744673 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.744695 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.847830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.847910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.847927 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.847951 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.847969 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.876023 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:05:54.082183788 +0000 UTC Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.915820 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f" exitCode=0 Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.915929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.923682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.923741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.939612 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.950900 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.950931 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.950942 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.950959 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:12 crc 
kubenswrapper[4768]: I0223 18:35:12.950971 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:12Z","lastTransitionTime":"2026-02-23T18:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.963579 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:12 crc kubenswrapper[4768]: I0223 18:35:12.982185 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:12Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.011498 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.034691 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.048235 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.053560 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.053604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.053614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.053631 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.053643 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.063657 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z 
is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.078484 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.096525 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.108773 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.121165 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.136865 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.149557 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.156295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.156397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.157127 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.157705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.157784 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.159352 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.180033 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.261348 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.261396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.261411 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.261433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.261448 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.307349 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:13 crc kubenswrapper[4768]: E0223 18:35:13.307608 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.364785 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.364860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.364881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.364913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.364933 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.468964 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.469036 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.469052 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.469074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.469090 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.573039 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.573104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.573122 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.573149 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.573175 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.676011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.676065 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.676081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.676105 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.676122 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.779775 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.779834 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.779851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.779875 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.779893 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.876958 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:33:06.930285484 +0000 UTC Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.882433 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.882521 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.882550 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.882577 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.882595 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.931489 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164" exitCode=0 Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.931569 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164"} Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.959395 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-d
ir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.980444 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:13Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.986622 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.986778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.987297 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.987369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:13 crc kubenswrapper[4768]: I0223 18:35:13.987387 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:13Z","lastTransitionTime":"2026-02-23T18:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.008919 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z 
is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.025651 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.043182 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.064473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.081790 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.090739 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.090822 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.090845 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.090878 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.090904 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.097119 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.113064 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.127810 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.149015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.180060 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.193530 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.193586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.193604 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.193628 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.193655 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.196845 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.214431 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.239631 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.296176 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.296238 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.296271 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.296295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.296309 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.306994 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.306994 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:14 crc kubenswrapper[4768]: E0223 18:35:14.307135 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:14 crc kubenswrapper[4768]: E0223 18:35:14.307239 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.400299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.400375 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.400396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.400427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.400449 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.506227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.506299 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.506310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.506328 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.506339 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.610422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.610496 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.610518 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.610546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.610571 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.713806 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.713855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.713868 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.713887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.713900 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.848425 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.848523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.848548 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.848586 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.848634 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.877671 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:23:58.752729902 +0000 UTC Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.940740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.945198 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69" exitCode=0 Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.945281 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.950633 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.950683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.950700 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.950724 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.950736 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:14Z","lastTransitionTime":"2026-02-23T18:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:14 crc kubenswrapper[4768]: I0223 18:35:14.984836 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:14Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.007207 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.022015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.035084 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.051070 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.053280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.053319 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.053335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.053358 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.053373 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.069372 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.082788 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.098121 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.112355 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.123801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.140168 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.155422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.155458 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.155469 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.155484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.155495 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.157471 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.168784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.180016 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.197295 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.258202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.258254 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.258267 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.258282 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.258293 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.306668 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:15 crc kubenswrapper[4768]: E0223 18:35:15.306871 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.321336 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.339026 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.353503 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.361139 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.361197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.361214 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.361237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.361284 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.377968 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.398034 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.422784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.436156 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.450062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.462893 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.463805 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.463946 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.463968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.463995 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.464014 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.479223 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.492834 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.505321 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.519530 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.541944 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.560804 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49
1e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.566599 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.566649 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.566664 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.566686 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.566700 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.669860 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.669925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.669948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.669973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.669991 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.772990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.773070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.773089 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.773116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.773134 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.875907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.875968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.875988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.876020 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.876041 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.878434 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:23:13.270513181 +0000 UTC Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.955460 4768 generic.go:334] "Generic (PLEG): container finished" podID="1947d9c5-33dd-4b10-8e84-e40f16a47a63" containerID="1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23" exitCode=0 Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.955518 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerDied","Data":"1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.978688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.978750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.978766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.978788 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.978804 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:15Z","lastTransitionTime":"2026-02-23T18:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:15 crc kubenswrapper[4768]: I0223 18:35:15.984189 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:15Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.005779 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.032369 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.048706 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.068482 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.082063 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.082110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.082126 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.082151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.082169 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.086178 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.099916 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.120537 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.137089 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.149646 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.166742 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186279 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186354 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186369 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186409 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.186447 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c7
4dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.188292 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.188370 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.188385 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.188401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.188414 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.204556 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.207858 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.213269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.213462 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.213623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.213652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.213687 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.222164 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hqtsz"] Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.222569 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.224829 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.224833 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.225553 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.225677 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.229828 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.232271 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.238215 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.238266 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.238278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.238302 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.238320 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.244170 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.254800 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259174 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259185 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259212 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.259841 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.269319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb748656-160a-49c5-a1b0-ce5949aaf631-host\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.269429 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvggb\" (UniqueName: \"kubernetes.io/projected/cb748656-160a-49c5-a1b0-ce5949aaf631-kube-api-access-rvggb\") pod 
\"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.269486 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb748656-160a-49c5-a1b0-ce5949aaf631-serviceca\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.273163 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.274345 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.278608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.278641 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.278653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.278671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.278683 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.292128 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.293766 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.293929 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.295439 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.295483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.295495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.295514 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.295531 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.306585 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.306612 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.306730 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:16 crc kubenswrapper[4768]: E0223 18:35:16.306950 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.307724 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.327784 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.338388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.361309 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.370855 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb748656-160a-49c5-a1b0-ce5949aaf631-host\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.370928 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvggb\" (UniqueName: \"kubernetes.io/projected/cb748656-160a-49c5-a1b0-ce5949aaf631-kube-api-access-rvggb\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc 
kubenswrapper[4768]: I0223 18:35:16.370952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb748656-160a-49c5-a1b0-ce5949aaf631-serviceca\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.371157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb748656-160a-49c5-a1b0-ce5949aaf631-host\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.372026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb748656-160a-49c5-a1b0-ce5949aaf631-serviceca\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.402409 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.402442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.402452 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.402485 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.402497 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.408650 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvggb\" (UniqueName: \"kubernetes.io/projected/cb748656-160a-49c5-a1b0-ce5949aaf631-kube-api-access-rvggb\") pod \"node-ca-hqtsz\" (UID: \"cb748656-160a-49c5-a1b0-ce5949aaf631\") " pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.410607 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.447688 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.460364 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.482299 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.494339 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.505223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.505264 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.505272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 
18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.505284 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.505292 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.517058 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.527600 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.537086 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hqtsz" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.542525 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T
18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: W0223 18:35:16.550747 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb748656_160a_49c5_a1b0_ce5949aaf631.slice/crio-0800f6c81e2549da2aaa5ebb658960cc59c2548b460d97fc29d21c754657070d WatchSource:0}: Error finding container 0800f6c81e2549da2aaa5ebb658960cc59c2548b460d97fc29d21c754657070d: Status 404 returned error can't find the container with id 0800f6c81e2549da2aaa5ebb658960cc59c2548b460d97fc29d21c754657070d Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.553050 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.610800 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.610842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.610855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc 
kubenswrapper[4768]: I0223 18:35:16.610872 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.610883 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.713491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.713546 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.713555 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.713570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.713579 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.815789 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.815819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.815830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.815846 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.815858 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.878912 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 13:29:41.102243582 +0000 UTC Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.918609 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.918671 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.918685 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.918704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.918736 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:16Z","lastTransitionTime":"2026-02-23T18:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.962925 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" event={"ID":"1947d9c5-33dd-4b10-8e84-e40f16a47a63","Type":"ContainerStarted","Data":"4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.971604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.971888 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.971913 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.971925 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.974013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hqtsz" event={"ID":"cb748656-160a-49c5-a1b0-ce5949aaf631","Type":"ContainerStarted","Data":"0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.974066 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hqtsz" event={"ID":"cb748656-160a-49c5-a1b0-ce5949aaf631","Type":"ContainerStarted","Data":"0800f6c81e2549da2aaa5ebb658960cc59c2548b460d97fc29d21c754657070d"} Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.978937 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:16 crc kubenswrapper[4768]: I0223 18:35:16.996114 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:16Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.012898 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.016871 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.017788 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.021529 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.021632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.021710 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.021802 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.021877 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.029511 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.040954 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.052730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.073519 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.091888 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.114743 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.126306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.126419 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.126442 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.126502 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.126529 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.131294 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.155199 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.169359 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.193789 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.213526 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.230889 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.230937 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.230985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.231009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.231026 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.235760 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.254097 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.277369 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 
18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.299165 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.307405 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:17 crc kubenswrapper[4768]: E0223 18:35:17.307605 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.319083 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.334855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.334915 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.334938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.334966 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.334991 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.347928 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.374618 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.395666 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.419791 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.437795 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.437870 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.437894 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.437928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.437951 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.439136 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.454205 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.472863 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.490225 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.511364 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.529748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.540290 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.540356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.540378 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.540407 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.540425 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.546013 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.560756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.580652 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b717
49040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad7
1d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4a
ca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:17Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.644182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.644224 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.644233 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.644262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.644274 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.747612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.747672 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.747688 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.747712 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.747728 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.851412 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.851488 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.851512 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.851542 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.851563 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.879730 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:25:19.474693802 +0000 UTC Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.954683 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.954733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.954750 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.954773 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:17 crc kubenswrapper[4768]: I0223 18:35:17.954790 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:17Z","lastTransitionTime":"2026-02-23T18:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.058141 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.058194 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.058213 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.058237 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.058288 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.165044 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.165293 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.165356 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.165417 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.165502 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.268296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.268341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.268355 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.268377 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.268392 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.306890 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.306935 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:18 crc kubenswrapper[4768]: E0223 18:35:18.307113 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:18 crc kubenswrapper[4768]: E0223 18:35:18.307228 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.308144 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.371343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.371427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.371456 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.371491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.371517 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.473887 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.473925 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.473934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.473948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.473957 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.576570 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.576614 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.576624 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.576647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.576656 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.679690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.679763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.679776 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.679801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.679816 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.783588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.783652 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.783665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.783691 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.783706 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.880115 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:34:55.405804764 +0000 UTC Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.890659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.890703 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.890717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.890737 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.890753 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.981473 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.983775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.984460 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.994907 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.994938 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.994947 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.994960 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.994970 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:18Z","lastTransitionTime":"2026-02-23T18:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:18 crc kubenswrapper[4768]: I0223 18:35:18.997133 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:18Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.007465 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.022598 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.035782 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.046615 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.060336 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.079205 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.097523 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.097565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.097576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.097592 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.097603 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.109222 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.128544 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.143217 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.160364 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.174814 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.189235 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.199580 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.199614 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.199623 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.199638 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.199651 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.205406 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.217995 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.234391 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:19Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.301939 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.301985 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.301994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.302009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.302019 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.307554 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:19 crc kubenswrapper[4768]: E0223 18:35:19.307679 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.404948 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.404997 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.405014 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.405037 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.405053 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.507437 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.507817 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.507835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.507865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.507885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.610397 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.610450 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.610465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.610483 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.610496 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.713507 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.713572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.713589 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.713612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.713629 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.822561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.822612 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.822625 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.822646 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.822662 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.881354 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:42:46.72084987 +0000 UTC Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.925114 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.925167 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.925186 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.925208 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.925224 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:19Z","lastTransitionTime":"2026-02-23T18:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.988649 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/0.log" Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.994301 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc" exitCode=1 Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.994412 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc"} Feb 23 18:35:19 crc kubenswrapper[4768]: I0223 18:35:19.995827 4768 scope.go:117] "RemoveContainer" containerID="694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.024891 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.029368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc 
kubenswrapper[4768]: I0223 18:35:20.029418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.029432 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.029453 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.029468 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.044458 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.061755 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.089120 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18
:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.100987 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.119843 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.133011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.133062 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.133072 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.133136 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.133155 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.134660 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.146375 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.159163 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.173362 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55a
c90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.183237 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.193260 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.207505 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.228833 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18
:35:19Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:19.564211 6451 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:19.564273 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:19.564280 6451 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:19.564296 6451 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:19.564302 6451 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 18:35:19.564315 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:19.564360 6451 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:19.564465 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:19.564503 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:19.564512 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:19.564564 6451 factory.go:656] Stopping watch factory\\\\nI0223 18:35:19.564598 6451 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:19.564657 6451 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:19.564685 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:19.564699 6451 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826
927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.235974 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.236059 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.236077 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.236103 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.236121 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.245438 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.260987 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:20Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.306687 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.306787 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:20 crc kubenswrapper[4768]: E0223 18:35:20.306837 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:20 crc kubenswrapper[4768]: E0223 18:35:20.307012 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.340372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.340418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.340431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.340454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.340466 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.442766 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.442801 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.442810 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.442825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.442834 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.545206 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.545269 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.545278 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.545296 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.545307 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.647704 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.647793 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.647811 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.647851 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.647871 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.750906 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.750944 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.750955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.750973 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.750986 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.853676 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.853722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.853752 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.853772 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.853788 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.882317 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:54:27.154084682 +0000 UTC Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.956955 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.957013 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.957029 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.957056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:20 crc kubenswrapper[4768]: I0223 18:35:20.957077 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:20Z","lastTransitionTime":"2026-02-23T18:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.007967 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/0.log" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.012471 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.013140 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.035587 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9ead
b658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.059983 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.060110 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.060153 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.060166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.060192 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.060207 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.076143 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.094139 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.128341 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:19Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:19.564211 6451 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:19.564273 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:19.564280 6451 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0223 18:35:19.564296 6451 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:19.564302 6451 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 18:35:19.564315 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:19.564360 6451 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:19.564465 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:19.564503 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:19.564512 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:19.564564 6451 factory.go:656] Stopping watch factory\\\\nI0223 18:35:19.564598 6451 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:19.564657 6451 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:19.564685 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:19.564699 6451 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.148503 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.163366 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.163424 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.163444 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.163472 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.163499 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.166750 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.185973 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.205195 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.223148 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.250093 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18
:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.266576 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.266634 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.266653 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.266675 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.266692 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.269403 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T
18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.290974 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.306797 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:21 crc kubenswrapper[4768]: E0223 18:35:21.306948 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.310881 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.333025 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.351158 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:21Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.369838 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.369917 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.369936 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.369970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.369993 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.473117 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.473182 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.473201 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.473228 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.473282 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.576891 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.576950 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.576961 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.576988 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.576998 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.679491 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.679668 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.679687 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.679717 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.679735 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.783166 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.783226 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.783272 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.783300 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.783318 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.883238 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 16:03:59.028124658 +0000 UTC Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.886578 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.886648 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.886674 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.886705 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.886725 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.990326 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.990403 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.990422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.990451 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:21 crc kubenswrapper[4768]: I0223 18:35:21.990469 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:21Z","lastTransitionTime":"2026-02-23T18:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.019364 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/1.log" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.020572 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/0.log" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.024785 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4" exitCode=1 Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.024845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.024892 4768 scope.go:117] "RemoveContainer" containerID="694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.026089 4768 scope.go:117] "RemoveContainer" containerID="3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4" Feb 23 18:35:22 crc kubenswrapper[4768]: E0223 18:35:22.026510 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.049296 4768 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.073817 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.093978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.094056 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.094076 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.094104 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.094126 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.096276 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.116097 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.131756 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.146922 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.170360 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.196473 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.202331 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.202392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.202549 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.203426 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.203457 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.225820 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.249019 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.275054 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:19Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:19.564211 6451 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:19.564273 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:19.564280 6451 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0223 18:35:19.564296 6451 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:19.564302 6451 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 18:35:19.564315 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:19.564360 6451 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:19.564465 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:19.564503 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:19.564512 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:19.564564 6451 factory.go:656] Stopping watch factory\\\\nI0223 18:35:19.564598 6451 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:19.564657 6451 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:19.564685 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:19.564699 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.306495 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 
18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.306552 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.306572 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.306601 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.306621 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.307053 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.307066 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:22 crc kubenswrapper[4768]: E0223 18:35:22.307207 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:22 crc kubenswrapper[4768]: E0223 18:35:22.307321 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.308801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.327576 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.331888 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz"] Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.332713 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.334573 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.336219 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.351059 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.369197 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.385629 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.406456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18
:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.408853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.408898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.408917 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.408947 4768 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.408970 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.423958 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T
18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.434965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70262447-d73b-4c4f-b551-9ee39758658f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.435090 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmvn\" (UniqueName: \"kubernetes.io/projected/70262447-d73b-4c4f-b551-9ee39758658f-kube-api-access-qhmvn\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.435188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.435227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.446000 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.465465 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.482130 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.502536 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.511794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.511865 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.511884 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.511913 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.511970 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.522874 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start 
--config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.536215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.536390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70262447-d73b-4c4f-b551-9ee39758658f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.536464 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmvn\" (UniqueName: \"kubernetes.io/projected/70262447-d73b-4c4f-b551-9ee39758658f-kube-api-access-qhmvn\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.536522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.537636 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.538395 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/70262447-d73b-4c4f-b551-9ee39758658f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.540391 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"
/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.545761 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/70262447-d73b-4c4f-b551-9ee39758658f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.554875 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhmvn\" (UniqueName: \"kubernetes.io/projected/70262447-d73b-4c4f-b551-9ee39758658f-kube-api-access-qhmvn\") pod \"ovnkube-control-plane-749d76644c-xzkdz\" (UID: \"70262447-d73b-4c4f-b551-9ee39758658f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.561073 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.581347 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.600427 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.614781 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.614853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.614877 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.614910 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.614935 4768 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.618203 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-
node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.642206 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55a
c90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.653455 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.667093 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: W0223 18:35:22.679039 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70262447_d73b_4c4f_b551_9ee39758658f.slice/crio-4fe88de28fc9f98d5b60c3af469e06ccca1476a13aed2233603b7b252c3f6de6 WatchSource:0}: Error finding container 4fe88de28fc9f98d5b60c3af469e06ccca1476a13aed2233603b7b252c3f6de6: Status 404 returned error can't find the container with id 4fe88de28fc9f98d5b60c3af469e06ccca1476a13aed2233603b7b252c3f6de6 Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.691838 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.713832 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.718524 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.718590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.718608 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.718640 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.718664 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.741561 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://694539c5fcb0c75aba10efd5e74fd9edbdef2a4a3a2c73e23c2abb1ef5e290bc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:19Z\\\",\\\"message\\\":\\\"ice/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:19.564211 6451 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:19.564273 6451 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:19.564280 6451 handler.go:190] Sending *v1.Namespace event handler 5 for 
removal\\\\nI0223 18:35:19.564296 6451 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:19.564302 6451 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0223 18:35:19.564315 6451 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:19.564360 6451 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:19.564465 6451 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:19.564503 6451 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:19.564512 6451 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:19.564564 6451 factory.go:656] Stopping watch factory\\\\nI0223 18:35:19.564598 6451 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:19.564657 6451 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:19.564685 6451 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:19.564699 6451 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 
handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"
/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:22Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.821794 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 
18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.821842 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.821853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.821871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.821883 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.884399 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:28:14.734250129 +0000 UTC Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.925678 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.925733 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.925745 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.925767 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:22 crc kubenswrapper[4768]: I0223 18:35:22.925781 4768 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:22Z","lastTransitionTime":"2026-02-23T18:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.029011 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.029064 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.029081 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.029108 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.029124 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.031667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" event={"ID":"70262447-d73b-4c4f-b551-9ee39758658f","Type":"ContainerStarted","Data":"eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.031733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" event={"ID":"70262447-d73b-4c4f-b551-9ee39758658f","Type":"ContainerStarted","Data":"4fe88de28fc9f98d5b60c3af469e06ccca1476a13aed2233603b7b252c3f6de6"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.034368 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/1.log" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.042298 4768 scope.go:117] "RemoveContainer" containerID="3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4" Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.042699 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.071973 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.093304 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.103857 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9s8hm"] Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.104453 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.104513 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.112722 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 
')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.131659 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.131691 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.131702 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.131719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.131731 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.132385 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.150228 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.167676 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.184797 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.210070 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 
18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.231359 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.234143 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.234170 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.234179 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.234191 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.234199 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.245524 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnvz\" (UniqueName: \"kubernetes.io/projected/1bcfbee2-d95a-4f58-b436-5233d3691ee8-kube-api-access-6nnvz\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.245558 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.245626 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.260447 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.269960 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.281842 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.292683 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.306634 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.306781 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.312007 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.325625 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.336893 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.336945 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.336957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.336977 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.336991 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.339456 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"na
me\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.346362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnvz\" (UniqueName: \"kubernetes.io/projected/1bcfbee2-d95a-4f58-b436-5233d3691ee8-kube-api-access-6nnvz\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.346411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.346564 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.346641 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:23.846615857 +0000 UTC m=+119.237101677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.356109 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.368199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnvz\" (UniqueName: \"kubernetes.io/projected/1bcfbee2-d95a-4f58-b436-5233d3691ee8-kube-api-access-6nnvz\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.372012 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.399911 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 
18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.422023 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.434629 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.438714 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.438748 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.438760 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.438778 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.438790 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.450749 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-
binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.460816 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.471260 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.480736 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.502748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce54
9180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.515895 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.528131 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.542590 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.542632 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.542647 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.542665 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.542677 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.545021 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.557081 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.566477 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.582063 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.592486 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.601938 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:23Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.645596 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.645663 4768 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.645690 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.645722 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.645750 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.748337 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.748422 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.748441 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.748465 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.748483 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.850372 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.850638 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:23 crc kubenswrapper[4768]: E0223 18:35:23.850753 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:24.850717095 +0000 UTC m=+120.241202935 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.852131 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.852199 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.852216 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.852240 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.852295 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.885431 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:05:50.930383286 +0000 UTC Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.955372 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.955454 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.955473 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.955500 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:23 crc kubenswrapper[4768]: I0223 18:35:23.955520 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:23Z","lastTransitionTime":"2026-02-23T18:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.048757 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" event={"ID":"70262447-d73b-4c4f-b551-9ee39758658f","Type":"ContainerStarted","Data":"d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.059099 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.059159 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.059177 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.059203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.059221 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.072673 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.092851 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.111406 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.125673 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.141137 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.154729 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.161341 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.161381 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.161392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.161408 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.161423 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.173794 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.191986 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc 
kubenswrapper[4768]: I0223 18:35:24.213326 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.233446 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.254053 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.263763 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.263819 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.263836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.263864 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.263885 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.287004 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.304356 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.307483 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.307731 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:24 crc kubenswrapper[4768]: E0223 18:35:24.307928 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:24 crc kubenswrapper[4768]: E0223 18:35:24.308190 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.322652 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control
-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.358809 4768 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a
5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08
304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.367158 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.367223 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.367242 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.367295 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.367314 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.378881 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.400591 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.420357 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-23T18:35:24Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.470418 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.470490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.470511 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.470537 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.470556 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.574294 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.574371 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.574394 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.574431 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.574455 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.678015 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.678074 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.678091 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.678116 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.678134 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.781740 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.781835 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.781853 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.781879 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.781898 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.861723 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:24 crc kubenswrapper[4768]: E0223 18:35:24.861923 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:24 crc kubenswrapper[4768]: E0223 18:35:24.861999 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:26.861973385 +0000 UTC m=+122.252459225 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885203 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885306 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885335 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885368 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885387 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.885574 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:57:45.651239226 +0000 UTC Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.988779 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.988830 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.988847 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.988871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:24 crc kubenswrapper[4768]: I0223 18:35:24.988889 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:24Z","lastTransitionTime":"2026-02-23T18:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.092494 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.092565 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.092582 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.092606 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.092624 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:25Z","lastTransitionTime":"2026-02-23T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.195414 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.195468 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.195484 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.195506 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.195524 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:25Z","lastTransitionTime":"2026-02-23T18:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:25 crc kubenswrapper[4768]: E0223 18:35:25.296370 4768 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.307595 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.307785 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:25 crc kubenswrapper[4768]: E0223 18:35:25.307883 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:25 crc kubenswrapper[4768]: E0223 18:35:25.308041 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.320941 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.324038 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc 
kubenswrapper[4768]: I0223 18:35:25.342492 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.361694 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.389655 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.433620 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 
18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.459354 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.483083 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.506693 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.525651 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.539382 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.555617 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.589591 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.607759 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.628519 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.647927 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.668734 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.689481 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.711157 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:25Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:25 crc kubenswrapper[4768]: E0223 18:35:25.879750 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 18:35:25 crc kubenswrapper[4768]: I0223 18:35:25.886760 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:47:41.244998433 +0000 UTC Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.307452 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.307512 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.307667 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.307793 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.356898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.356954 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.356970 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.356994 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.357011 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:26Z","lastTransitionTime":"2026-02-23T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.378482 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:26Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.384119 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.384202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.384220 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.384280 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.384299 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:26Z","lastTransitionTime":"2026-02-23T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.403667 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:26Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.408749 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.408809 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.408825 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.408850 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.408868 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:26Z","lastTransitionTime":"2026-02-23T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.432113 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:26Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.438561 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.438603 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.438620 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.438645 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.438663 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:26Z","lastTransitionTime":"2026-02-23T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.460503 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:26Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.465573 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.465626 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.465642 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.465666 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.465686 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:26Z","lastTransitionTime":"2026-02-23T18:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.486423 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:26Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.486646 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.888459 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:40:14.681917239 +0000 UTC Feb 23 18:35:26 crc kubenswrapper[4768]: I0223 18:35:26.892478 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.892594 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:26 crc kubenswrapper[4768]: E0223 18:35:26.892694 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:30.892668869 +0000 UTC m=+126.283154709 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:27 crc kubenswrapper[4768]: I0223 18:35:27.306839 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:27 crc kubenswrapper[4768]: I0223 18:35:27.306926 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:27 crc kubenswrapper[4768]: E0223 18:35:27.307035 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:27 crc kubenswrapper[4768]: E0223 18:35:27.307124 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:27 crc kubenswrapper[4768]: I0223 18:35:27.889432 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 10:03:44.758676372 +0000 UTC Feb 23 18:35:28 crc kubenswrapper[4768]: I0223 18:35:28.307175 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:28 crc kubenswrapper[4768]: E0223 18:35:28.307692 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:28 crc kubenswrapper[4768]: I0223 18:35:28.307196 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:28 crc kubenswrapper[4768]: E0223 18:35:28.308064 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:28 crc kubenswrapper[4768]: I0223 18:35:28.890554 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 03:10:51.820561792 +0000 UTC Feb 23 18:35:29 crc kubenswrapper[4768]: I0223 18:35:29.306803 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:29 crc kubenswrapper[4768]: I0223 18:35:29.306834 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:29 crc kubenswrapper[4768]: E0223 18:35:29.307021 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:29 crc kubenswrapper[4768]: E0223 18:35:29.307154 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:29 crc kubenswrapper[4768]: I0223 18:35:29.891075 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:39:42.837763828 +0000 UTC Feb 23 18:35:30 crc kubenswrapper[4768]: I0223 18:35:30.307085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:30 crc kubenswrapper[4768]: I0223 18:35:30.307120 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:30 crc kubenswrapper[4768]: E0223 18:35:30.307320 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:30 crc kubenswrapper[4768]: E0223 18:35:30.307524 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:30 crc kubenswrapper[4768]: E0223 18:35:30.881419 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 18:35:30 crc kubenswrapper[4768]: I0223 18:35:30.891882 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 06:27:07.807931908 +0000 UTC Feb 23 18:35:30 crc kubenswrapper[4768]: I0223 18:35:30.940857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:30 crc kubenswrapper[4768]: E0223 18:35:30.941125 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:30 crc kubenswrapper[4768]: E0223 18:35:30.941235 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:38.941206814 +0000 UTC m=+134.331692654 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:31 crc kubenswrapper[4768]: I0223 18:35:31.307134 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:31 crc kubenswrapper[4768]: I0223 18:35:31.307322 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:31 crc kubenswrapper[4768]: E0223 18:35:31.307480 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:31 crc kubenswrapper[4768]: E0223 18:35:31.307584 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:31 crc kubenswrapper[4768]: I0223 18:35:31.893215 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:33:03.724547065 +0000 UTC Feb 23 18:35:32 crc kubenswrapper[4768]: I0223 18:35:32.307426 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:32 crc kubenswrapper[4768]: I0223 18:35:32.307469 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:32 crc kubenswrapper[4768]: E0223 18:35:32.307631 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:32 crc kubenswrapper[4768]: E0223 18:35:32.307765 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:32 crc kubenswrapper[4768]: I0223 18:35:32.893376 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:41:35.374347873 +0000 UTC Feb 23 18:35:33 crc kubenswrapper[4768]: I0223 18:35:33.306540 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:33 crc kubenswrapper[4768]: E0223 18:35:33.306755 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:33 crc kubenswrapper[4768]: I0223 18:35:33.306913 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:33 crc kubenswrapper[4768]: E0223 18:35:33.307137 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:33 crc kubenswrapper[4768]: I0223 18:35:33.893580 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 18:55:33.306018704 +0000 UTC Feb 23 18:35:34 crc kubenswrapper[4768]: I0223 18:35:34.307106 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:34 crc kubenswrapper[4768]: I0223 18:35:34.307120 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:34 crc kubenswrapper[4768]: E0223 18:35:34.307347 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:34 crc kubenswrapper[4768]: E0223 18:35:34.307480 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:34 crc kubenswrapper[4768]: I0223 18:35:34.894768 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 13:12:44.746396134 +0000 UTC Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.307537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.307726 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:35 crc kubenswrapper[4768]: E0223 18:35:35.308180 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:35 crc kubenswrapper[4768]: E0223 18:35:35.308205 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.326885 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 
2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.342410 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.365943 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.382708 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc 
kubenswrapper[4768]: I0223 18:35:35.410219 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.432117 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.452024 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.481661 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 
18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.508699 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.527475 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.547876 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.567458 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.583815 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.601397 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.623190 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34
:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.642148 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.662761 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.682804 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.699600 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.820489 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.843371 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd
76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.864150 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: E0223 18:35:35.882858 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.884367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.895940 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:24:11.837391548 +0000 UTC Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.921803 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.957382 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:35 crc kubenswrapper[4768]: I0223 18:35:35.977762 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.001564 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:35Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.020876 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.038444 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.057507 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.078014 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34
:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.100071 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.120908 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.140757 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.160935 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.176833 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.193344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.215750 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.233988 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.306934 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.306988 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.307131 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.307300 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.867047 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.867396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.867588 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.867719 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.867864 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:36Z","lastTransitionTime":"2026-02-23T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.888325 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.893227 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.893311 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.893329 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.893351 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.893370 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:36Z","lastTransitionTime":"2026-02-23T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.896962 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:08:50.546918121 +0000 UTC Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.913811 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",
\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.919022 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.919107 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.919128 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.919151 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.919168 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:36Z","lastTransitionTime":"2026-02-23T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.939986 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.945957 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.946025 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.946043 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.946070 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.946089 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:36Z","lastTransitionTime":"2026-02-23T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.964856 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.970321 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.970401 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.970420 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.970447 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:36 crc kubenswrapper[4768]: I0223 18:35:36.970465 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:36Z","lastTransitionTime":"2026-02-23T18:35:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.987110 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:36Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:36 crc kubenswrapper[4768]: E0223 18:35:36.987388 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:37 crc kubenswrapper[4768]: I0223 18:35:37.307168 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:37 crc kubenswrapper[4768]: E0223 18:35:37.307387 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:37 crc kubenswrapper[4768]: I0223 18:35:37.307560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:37 crc kubenswrapper[4768]: E0223 18:35:37.308314 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:37 crc kubenswrapper[4768]: I0223 18:35:37.897578 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 09:54:43.721224641 +0000 UTC Feb 23 18:35:38 crc kubenswrapper[4768]: I0223 18:35:38.307298 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:38 crc kubenswrapper[4768]: I0223 18:35:38.307784 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:38 crc kubenswrapper[4768]: E0223 18:35:38.307972 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:38 crc kubenswrapper[4768]: E0223 18:35:38.308448 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:38 crc kubenswrapper[4768]: I0223 18:35:38.308618 4768 scope.go:117] "RemoveContainer" containerID="3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4" Feb 23 18:35:38 crc kubenswrapper[4768]: I0223 18:35:38.898643 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:43:23.338960329 +0000 UTC Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.026200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:39 crc kubenswrapper[4768]: E0223 18:35:39.026431 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:39 crc kubenswrapper[4768]: E0223 18:35:39.026537 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:35:55.026510767 +0000 UTC m=+150.416996577 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.114569 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/1.log" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.117620 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb"} Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.118014 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.139345 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event 
handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 
18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.151783 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd
76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.168062 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.181118 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.191800 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.203431 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.217149 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.246771 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.262962 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.281943 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.297345 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.307441 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.307532 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:39 crc kubenswrapper[4768]: E0223 18:35:39.307630 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:39 crc kubenswrapper[4768]: E0223 18:35:39.307780 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.315126 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.328617 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.344620 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.361645 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.372202 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc 
kubenswrapper[4768]: I0223 18:35:39.431054 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.442992 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.463408 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:39Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:39 crc kubenswrapper[4768]: I0223 18:35:39.899196 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:34:11.903857196 +0000 UTC Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.123572 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/2.log" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.124514 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/1.log" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.128137 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" exitCode=1 Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.128189 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb"} Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.128236 4768 scope.go:117] "RemoveContainer" containerID="3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.129061 4768 scope.go:117] "RemoveContainer" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.129299 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.147641 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.171528 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.194801 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.215300 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.235992 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.240136 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.240377 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:44.24033313 +0000 UTC m=+199.630818960 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.240681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.241005 4768 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.241171 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:36:44.241140802 +0000 UTC m=+199.631626642 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.241040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.241493 4768 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.241696 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-23 18:36:44.241671007 +0000 UTC m=+199.632156847 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.260122 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\
\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.279213 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.298079 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.307654 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.307681 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.308617 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.308704 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.317015 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.339056 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.342800 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.342967 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343069 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343118 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343136 4768 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343173 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343201 4768 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343219 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:44.343194998 +0000 UTC m=+199.733680808 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343227 4768 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.343338 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-23 18:36:44.343312241 +0000 UTC m=+199.733798081 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.371356 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3ddbbbc3d2e8c6dd54f2684c3fb8a282e6f3e5e3f8a1a21ecc9900475c900ed4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:21Z\\\",\\\"message\\\":\\\"18:35:20.906139 6668 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18:35:20.906091 6668 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0223 18:35:20.906285 6668 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0223 18:35:20.906313 6668 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0223 18:35:20.906328 6668 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0223 18:35:20.906350 6668 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0223 18:35:20.906380 6668 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0223 18:35:20.906429 6668 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0223 18:35:20.906441 6668 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0223 18:35:20.906469 6668 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0223 18:35:20.906492 6668 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0223 18:35:20.906496 6668 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:20.906526 6668 handler.go:208] Removed *v1.Node event handler 7\\\\nI0223 18:35:20.906549 6668 handler.go:208] Removed *v1.Node event handler 2\\\\nI0223 18:35:20.906569 6668 factory.go:656] Stopping watch factory\\\\nI0223 18:35:20.906596 6668 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mount
Path\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.393304 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0
f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.413813 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.434687 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.452748 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.468751 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.484229 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.518386 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.536970 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:40Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:40 crc kubenswrapper[4768]: E0223 18:35:40.884884 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 18:35:40 crc kubenswrapper[4768]: I0223 18:35:40.900499 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 21:05:17.138810698 +0000 UTC Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.133792 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/2.log" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.139571 4768 scope.go:117] "RemoveContainer" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" Feb 23 18:35:41 crc kubenswrapper[4768]: E0223 18:35:41.140148 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.161522 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.180665 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.199367 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.218724 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.236173 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.249162 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.264198 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.289431 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.306408 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.307098 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.307209 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:41 crc kubenswrapper[4768]: E0223 18:35:41.307503 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:41 crc kubenswrapper[4768]: E0223 18:35:41.307687 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.328907 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.350552 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.369324 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.402941 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.438850 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.459776 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.476742 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.494534 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.510344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.529945 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:41Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:41 crc kubenswrapper[4768]: I0223 18:35:41.901151 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 09:29:37.460848652 +0000 UTC Feb 23 18:35:42 crc kubenswrapper[4768]: I0223 18:35:42.306728 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:42 crc kubenswrapper[4768]: I0223 18:35:42.306788 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:42 crc kubenswrapper[4768]: E0223 18:35:42.307507 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:42 crc kubenswrapper[4768]: E0223 18:35:42.307676 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:42 crc kubenswrapper[4768]: I0223 18:35:42.901841 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 20:36:07.535770136 +0000 UTC Feb 23 18:35:43 crc kubenswrapper[4768]: I0223 18:35:43.307107 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:43 crc kubenswrapper[4768]: I0223 18:35:43.307176 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:43 crc kubenswrapper[4768]: E0223 18:35:43.307441 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:43 crc kubenswrapper[4768]: E0223 18:35:43.307662 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:43 crc kubenswrapper[4768]: I0223 18:35:43.902237 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 18:58:15.241084343 +0000 UTC Feb 23 18:35:44 crc kubenswrapper[4768]: I0223 18:35:44.307597 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:44 crc kubenswrapper[4768]: I0223 18:35:44.308053 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:44 crc kubenswrapper[4768]: E0223 18:35:44.308454 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:44 crc kubenswrapper[4768]: E0223 18:35:44.308483 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:44 crc kubenswrapper[4768]: I0223 18:35:44.904022 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:13:04.836642875 +0000 UTC Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.306638 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:45 crc kubenswrapper[4768]: E0223 18:35:45.306893 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.308677 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:45 crc kubenswrapper[4768]: E0223 18:35:45.308873 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.332034 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.347741 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.365956 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.394509 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 
reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.430430 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.448999 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.466518 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.480604 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c270
2f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.492851 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.509560 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.528392 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34
:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.543578 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.563489 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.583642 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.599177 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.611744 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.626397 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.646166 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.662877 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:45Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:45 crc kubenswrapper[4768]: E0223 18:35:45.886799 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 18:35:45 crc kubenswrapper[4768]: I0223 18:35:45.904847 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:56:10.050390887 +0000 UTC Feb 23 18:35:46 crc kubenswrapper[4768]: I0223 18:35:46.307177 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:46 crc kubenswrapper[4768]: I0223 18:35:46.307236 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:46 crc kubenswrapper[4768]: E0223 18:35:46.307432 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:46 crc kubenswrapper[4768]: E0223 18:35:46.307562 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:46 crc kubenswrapper[4768]: I0223 18:35:46.905411 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:51:07.609910313 +0000 UTC Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.307130 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.307489 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.308108 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.308335 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.385855 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.385903 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.385920 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.385943 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.385959 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:47Z","lastTransitionTime":"2026-02-23T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.406302 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:47Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.411928 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.411968 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.411980 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.412009 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.412026 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:47Z","lastTransitionTime":"2026-02-23T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.428801 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:47Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.432836 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.432871 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.432881 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.432898 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.432908 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:47Z","lastTransitionTime":"2026-02-23T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.449786 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:47Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.453197 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.453221 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.453232 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.453262 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.453273 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:47Z","lastTransitionTime":"2026-02-23T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.463617 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:47Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.470339 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.470367 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.470376 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.470392 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.470405 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:47Z","lastTransitionTime":"2026-02-23T18:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.488587 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:47Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:47 crc kubenswrapper[4768]: E0223 18:35:47.488815 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:47 crc kubenswrapper[4768]: I0223 18:35:47.905995 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:13:53.853612043 +0000 UTC Feb 23 18:35:48 crc kubenswrapper[4768]: I0223 18:35:48.306616 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:48 crc kubenswrapper[4768]: I0223 18:35:48.306617 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:48 crc kubenswrapper[4768]: E0223 18:35:48.306854 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:48 crc kubenswrapper[4768]: E0223 18:35:48.306984 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:48 crc kubenswrapper[4768]: I0223 18:35:48.906675 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 13:35:34.076801248 +0000 UTC Feb 23 18:35:49 crc kubenswrapper[4768]: I0223 18:35:49.307160 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:49 crc kubenswrapper[4768]: I0223 18:35:49.307211 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:49 crc kubenswrapper[4768]: E0223 18:35:49.307398 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:49 crc kubenswrapper[4768]: E0223 18:35:49.307558 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:49 crc kubenswrapper[4768]: I0223 18:35:49.907192 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:37:19.915743087 +0000 UTC Feb 23 18:35:50 crc kubenswrapper[4768]: I0223 18:35:50.307484 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:50 crc kubenswrapper[4768]: I0223 18:35:50.307484 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:50 crc kubenswrapper[4768]: E0223 18:35:50.307840 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:50 crc kubenswrapper[4768]: E0223 18:35:50.307683 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:50 crc kubenswrapper[4768]: E0223 18:35:50.888656 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 23 18:35:50 crc kubenswrapper[4768]: I0223 18:35:50.908820 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 03:00:44.209501678 +0000 UTC Feb 23 18:35:51 crc kubenswrapper[4768]: I0223 18:35:51.307599 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:51 crc kubenswrapper[4768]: I0223 18:35:51.307634 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:51 crc kubenswrapper[4768]: E0223 18:35:51.308438 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:51 crc kubenswrapper[4768]: E0223 18:35:51.308529 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:51 crc kubenswrapper[4768]: I0223 18:35:51.909206 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:19:03.173061073 +0000 UTC Feb 23 18:35:52 crc kubenswrapper[4768]: I0223 18:35:52.306595 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:52 crc kubenswrapper[4768]: I0223 18:35:52.306652 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:52 crc kubenswrapper[4768]: E0223 18:35:52.306813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:52 crc kubenswrapper[4768]: E0223 18:35:52.307000 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:52 crc kubenswrapper[4768]: I0223 18:35:52.910434 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 13:10:12.915435569 +0000 UTC Feb 23 18:35:53 crc kubenswrapper[4768]: I0223 18:35:53.307431 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:53 crc kubenswrapper[4768]: I0223 18:35:53.307935 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:53 crc kubenswrapper[4768]: E0223 18:35:53.308108 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:53 crc kubenswrapper[4768]: E0223 18:35:53.308461 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:53 crc kubenswrapper[4768]: I0223 18:35:53.911405 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:51:44.445054046 +0000 UTC Feb 23 18:35:54 crc kubenswrapper[4768]: I0223 18:35:54.307019 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:54 crc kubenswrapper[4768]: I0223 18:35:54.307122 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:54 crc kubenswrapper[4768]: E0223 18:35:54.307309 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:54 crc kubenswrapper[4768]: E0223 18:35:54.307539 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:54 crc kubenswrapper[4768]: I0223 18:35:54.911535 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 23:53:38.863160303 +0000 UTC Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.112659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.112886 4768 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.113016 4768 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs podName:1bcfbee2-d95a-4f58-b436-5233d3691ee8 nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.112971162 +0000 UTC m=+182.503457002 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs") pod "network-metrics-daemon-9s8hm" (UID: "1bcfbee2-d95a-4f58-b436-5233d3691ee8") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.307676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.307897 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.307676 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.308331 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.315239 4768 scope.go:117] "RemoveContainer" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.316311 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.352461 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.380343 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd
76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.406820 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.429522 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.450924 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.466683 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.484455 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.518916 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.546576 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.577301 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.593647 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.609161 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.621056 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.634461 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.645058 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.655734 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc 
kubenswrapper[4768]: I0223 18:35:55.668730 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.679597 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.697785 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:55Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:55 crc kubenswrapper[4768]: E0223 18:35:55.889602 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 18:35:55 crc kubenswrapper[4768]: I0223 18:35:55.911690 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 01:11:06.684189538 +0000 UTC Feb 23 18:35:56 crc kubenswrapper[4768]: I0223 18:35:56.307354 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:56 crc kubenswrapper[4768]: I0223 18:35:56.307354 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:56 crc kubenswrapper[4768]: E0223 18:35:56.307578 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:56 crc kubenswrapper[4768]: E0223 18:35:56.307748 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:56 crc kubenswrapper[4768]: I0223 18:35:56.912832 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:59:59.247446603 +0000 UTC Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.203034 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/0.log" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.203150 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c7d1a60-c63e-4279-9ce9-4eea677d4a70" containerID="f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1" exitCode=1 Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.203205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rcq8b" 
event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerDied","Data":"f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1"} Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.203943 4768 scope.go:117] "RemoveContainer" containerID="f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.228076 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\"
:\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.251016 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.277718 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.297970 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.307600 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.307661 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.307739 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.307816 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.320442 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.337703 4768 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.356630 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.383968 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08
bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.401908 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.426066 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\",\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.445287 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.463854 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.493854 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.528197 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.546879 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.567815 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:56Z\\\",\\\"message\\\":\\\"2026-02-23T18:35:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1317e746-16b4-4a90-8b54-67f50380c333\\\\n2026-02-23T18:35:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1317e746-16b4-4a90-8b54-67f50380c333 to /host/opt/cni/bin/\\\\n2026-02-23T18:35:11Z [verbose] multus-daemon started\\\\n2026-02-23T18:35:11Z [verbose] Readiness Indicator file check\\\\n2026-02-23T18:35:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.584275 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.613965 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.632451 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e4410
61da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.759236 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.759343 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.759366 4768 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.759396 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.759419 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:57Z","lastTransitionTime":"2026-02-23T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.780810 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.786024 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.786100 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.786121 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.786147 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.786163 4768 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:57Z","lastTransitionTime":"2026-02-23T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.806147 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.811202 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.811283 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.811310 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.811334 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.811351 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:57Z","lastTransitionTime":"2026-02-23T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.832411 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.837352 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.837446 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.837464 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.837490 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.837508 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:57Z","lastTransitionTime":"2026-02-23T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.857284 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.862309 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.862373 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.862395 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.862427 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.862450 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:35:57Z","lastTransitionTime":"2026-02-23T18:35:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.882482 4768 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"572e458a-3489-410c-99b8-d0bc0a8b7420\\\",\\\"systemUUID\\\":\\\"43a108d7-6740-4b29-827b-176ca14f7e0c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:57Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:57 crc kubenswrapper[4768]: E0223 18:35:57.882704 4768 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 23 18:35:57 crc kubenswrapper[4768]: I0223 18:35:57.913920 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:03:17.847396738 +0000 UTC Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.210322 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/0.log" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.210405 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rcq8b" event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerStarted","Data":"d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d"} Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.231312 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.266481 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:39Z\\\",\\\"message\\\":\\\"ip/v1/apis/informers/externalversions/factory.go:140\\\\nI0223 18:35:39.372626 6902 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372768 6902 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372901 6902 
reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.372972 6902 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0223 18:35:39.373090 6902 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0223 18:35:39.373736 6902 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0223 18:35:39.373822 6902 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0223 18:35:39.373836 6902 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0223 18:35:39.373868 6902 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0223 18:35:39.373896 6902 factory.go:656] Stopping watch factory\\\\nI0223 18:35:39.373918 6902 ovnkube.go:599] Stopped ovnkube\\\\nI0223 18:35:39.373915 6902 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0223 18\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:38Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbxnc_openshift-ovn-kubernetes(dfa4db1d-97c7-44ee-be87-27167edeb9a9)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://932c555ee50581f84c
b4eb6b4eb8b09df4615e432518841c8e19229826927f7e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxnx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbxnc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.291364 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:32Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0223 18:34:31.789486 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0223 18:34:31.789967 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0223 18:34:31.791500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3323487754/tls.crt::/tmp/serving-cert-3323487754/tls.key\\\\\\\"\\\\nI0223 18:34:32.254848 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0223 18:34:32.257783 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0223 18:34:32.257805 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0223 18:34:32.257830 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0223 18:34:32.257838 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0223 18:34:32.264461 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0223 18:34:32.264475 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0223 18:34:32.264515 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264527 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0223 18:34:32.264535 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0223 18:34:32.264540 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0223 18:34:32.264545 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0223 18:34:32.264550 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0223 18:34:32.266677 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:34:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c491e99fb12af86f01feb4fc13927d784fd
76009f1aa3c83cb394356f7537e2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.306637 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.306728 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:35:58 crc kubenswrapper[4768]: E0223 18:35:58.306875 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:35:58 crc kubenswrapper[4768]: E0223 18:35:58.307095 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.314768 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.334768 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rcq8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1c7d1a60-c63e-4279-9ce9-4eea677d4a70\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-23T18:35:56Z\\\",\\\"message\\\":\\\"2026-02-23T18:35:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1317e746-16b4-4a90-8b54-67f50380c333\\\\n2026-02-23T18:35:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1317e746-16b4-4a90-8b54-67f50380c333 to /host/opt/cni/bin/\\\\n2026-02-23T18:35:11Z [verbose] multus-daemon started\\\\n2026-02-23T18:35:11Z [verbose] 
Readiness Indicator file check\\\\n2026-02-23T18:35:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lfc2m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rcq8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.363349 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fe5b6d612c542a478e7f4aa294ef9ccc511f640c07624efe0b93c3e135b4ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe
621129b477809506cd43835f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4q7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-zckb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.379021 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hqtsz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cb748656-160a-49c5-a1b0-ce5949aaf631\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0cd6cee2366826becdc0329962d53d573810590d1f5ce35ca487a0a17e8163bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rvggb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:16Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hqtsz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.397445 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70262447-d73b-4c4f-b551-9ee39758658f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eda95b2dc0e60c5e1a1a2cea47fddadaa71caa88e9649ccabbe8f217c50b8b35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d78881ae1a72848acea3fbdc5e2a6144e441061da697d1e36426e49ecc3e05a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhmvn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xzkdz\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.432095 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d18f89c-9d0e-479f-b535-8478f9de3a95\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea576c140c6c33f3d8ce49cccb62476be1e19f23259753323086b6988508288a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5180df27e3c6156b3ab8ec5f714d6225aac8277077177d8db06a5b4fc95bbff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22f37ca34ace1896fba5a609362eb7a35c61c95f2daeb8e80230a33cc34b28d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b8f89493596d02d898947ce549180d03d48dbacd29e694046c2c6e730db5dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55a45c9182190f8a4eecaa9d03edbdd1f66699ef77cca51cb680f9ae86fc1c94\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3dd6b59d647167f2e5e9bbd1e3f08304977a4083d8c3f133f6924c15d2802567\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://500ada6c59225b234a73957ca905a3541a8d047e04f97b819ad0ff8976e5336f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://29da4914bedc953bdc5ba739eb80521c5437cf63afbbed6a0228a08a29e8a579\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.452953 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://806876abe87d6f41d17de1ed5f705ad5a414dec25f8128cb672f5a469c1fe12b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.475472 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5f9ea41b2b5529cd1bf4c5bea72df9ea1c8c7375d9ebad4a6f44485d3f1fdd8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.496519 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.518388 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b8db7be21e23134ad7c5d1a9fac5b724162a1421e907ed562b52b3815e471c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1e9988f1662fa2ecd2c44eaa3cb1789d7167cce142de5c0f5d17fe11852451e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.540344 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557ec8fa-b65e-4f74-8901-a1672977bf6f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85073f768ac345f7438884386c404f4f7f8ebb10f9dc98c66d951627fdd83163\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e86fb50dd0aa5f1e128bedbad23b83a6852c7285c7ad8435c74dfcc0f0ff0ed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-23T18:34:20Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0223 18:33:51.469623 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0223 18:33:51.471101 1 observer_polling.go:159] Starting file observer\\\\nI0223 18:33:51.472942 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0223 18:33:51.474109 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0223 18:34:15.655692 1 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials\\\\nI0223 18:34:20.980412 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0223 18:34:20.980550 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:51Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:34:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9966ec354b094bb11f97c0a46e9a082801eaf8b7237a78bf93cf7b9ee799e430\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f418e0cadb6b289f5f50ff6b8a40160463b30ffdab42ac35b12947c9764a605f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.561621 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39b0769b-baaa-4eb4-a544-1b89662b8c18\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83d80f628682805293fe75a0fa82f948179ec7cdb28b5376049360588801fc29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd3d8876e35ff0cfeb97d286b0cf18bd2517d4a7359ca2f8b98a267171e5aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7ce49ba410b6075055afc3f7f1ec5d046d9cca0e07b108d29e151851df355dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://e8f0eddc1c8a057774d6929f1ae30913be3be6861101ca68f6fdd1842779e83d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.586224 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bvntk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1947d9c5-33dd-4b10-8e84-e40f16a47a63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dde5386d4c5c9eadb658e9573ccaa8d454981fc4c4407b363677d9bb96032f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e957289f1ea70931cb1b56570ce9ebbd4e06f8bf7183f014b99fac146c34a694\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e40fee2dcdbbbc4dd5dee466a975b464086c07954dc26698652b71749040c6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:11Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80af571c424dabe5dc21be2048318548f5bccdea2c79db8cda4bdad71d46e96f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd55a
c90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd55ac90b350c582ed77b23134b82287a914faf06c0410fe05e8de89052f2164\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e4aca61dad4d417bf9458db2c000e08bf931502eacafda6c3c42a878046df69\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:14Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a0dc76033dfd0b6778687f9daac4e9642d805311883a347f43f9d5f5de62e23\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:35:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x2r29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bvntk\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.603999 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bcfbee2-d95a-4f58-b436-5233d3691ee8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nnvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9s8hm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc 
kubenswrapper[4768]: I0223 18:35:58.621811 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6fd1d03b-9b48-4047-8f7d-743866cc6662\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:33:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fde01d75e8b347663827d69abdcad23a47be279d3a4d842f5c297ab176eb54fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:33:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bea82e7ee2d530048ab4a1da26441833926b47a8ef311180453900492c46ef14\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-23T18:33:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-23T18:33:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:33:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.641956 4768 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2d9sk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75a70ce4-e083-4488-9538-100e05969dfd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-23T18:35:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b7ef415124a8f20f0033bd456f88aaa7c7cd42a87a98ab378b6fd086db4571\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-23T18:35:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jqktc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-23T18:35:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2d9sk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-23T18:35:58Z is after 2025-08-24T17:21:41Z" Feb 23 18:35:58 crc kubenswrapper[4768]: I0223 18:35:58.914452 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:06:40.342506738 +0000 UTC Feb 23 18:35:59 crc kubenswrapper[4768]: I0223 18:35:59.307537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:35:59 crc kubenswrapper[4768]: I0223 18:35:59.307560 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:35:59 crc kubenswrapper[4768]: E0223 18:35:59.308008 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:35:59 crc kubenswrapper[4768]: E0223 18:35:59.308188 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:35:59 crc kubenswrapper[4768]: I0223 18:35:59.915554 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 15:48:56.76015502 +0000 UTC Feb 23 18:36:00 crc kubenswrapper[4768]: I0223 18:36:00.306927 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:00 crc kubenswrapper[4768]: I0223 18:36:00.306939 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:00 crc kubenswrapper[4768]: E0223 18:36:00.307115 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:00 crc kubenswrapper[4768]: E0223 18:36:00.307228 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:00 crc kubenswrapper[4768]: E0223 18:36:00.891401 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 23 18:36:00 crc kubenswrapper[4768]: I0223 18:36:00.915988 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 15:11:02.482114899 +0000 UTC Feb 23 18:36:01 crc kubenswrapper[4768]: I0223 18:36:01.306625 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:01 crc kubenswrapper[4768]: I0223 18:36:01.306790 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:01 crc kubenswrapper[4768]: E0223 18:36:01.306875 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:01 crc kubenswrapper[4768]: E0223 18:36:01.306994 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:01 crc kubenswrapper[4768]: I0223 18:36:01.916465 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:59:29.676183894 +0000 UTC Feb 23 18:36:02 crc kubenswrapper[4768]: I0223 18:36:02.307313 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:02 crc kubenswrapper[4768]: I0223 18:36:02.307384 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:02 crc kubenswrapper[4768]: E0223 18:36:02.307510 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:02 crc kubenswrapper[4768]: E0223 18:36:02.307659 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:02 crc kubenswrapper[4768]: I0223 18:36:02.917708 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 09:59:18.755340778 +0000 UTC Feb 23 18:36:03 crc kubenswrapper[4768]: I0223 18:36:03.307700 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:03 crc kubenswrapper[4768]: I0223 18:36:03.307872 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:03 crc kubenswrapper[4768]: E0223 18:36:03.308081 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:03 crc kubenswrapper[4768]: E0223 18:36:03.308362 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:03 crc kubenswrapper[4768]: I0223 18:36:03.918811 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 09:44:45.345777161 +0000 UTC Feb 23 18:36:04 crc kubenswrapper[4768]: I0223 18:36:04.306891 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:04 crc kubenswrapper[4768]: I0223 18:36:04.306891 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:04 crc kubenswrapper[4768]: E0223 18:36:04.307093 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:04 crc kubenswrapper[4768]: E0223 18:36:04.307193 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:04 crc kubenswrapper[4768]: I0223 18:36:04.919451 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:32:19.273684485 +0000 UTC Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.306762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.306807 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:05 crc kubenswrapper[4768]: E0223 18:36:05.307142 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:05 crc kubenswrapper[4768]: E0223 18:36:05.307289 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.345326 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.345293099 podStartE2EDuration="1m29.345293099s" podCreationTimestamp="2026-02-23 18:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.344807386 +0000 UTC m=+160.735293216" watchObservedRunningTime="2026-02-23 18:36:05.345293099 +0000 UTC m=+160.735778939" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.459351 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xzkdz" podStartSLOduration=99.459329633 podStartE2EDuration="1m39.459329633s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.458569233 +0000 UTC m=+160.849055083" watchObservedRunningTime="2026-02-23 18:36:05.459329633 +0000 UTC m=+160.849815463" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.502698 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=74.502666645 podStartE2EDuration="1m14.502666645s" podCreationTimestamp="2026-02-23 18:34:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.502655795 +0000 UTC m=+160.893141645" watchObservedRunningTime="2026-02-23 18:36:05.502666645 +0000 UTC m=+160.893152475" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.539198 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rcq8b" podStartSLOduration=100.539179259 podStartE2EDuration="1m40.539179259s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.538591943 +0000 UTC m=+160.929077823" watchObservedRunningTime="2026-02-23 18:36:05.539179259 +0000 UTC m=+160.929665069" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.553636 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podStartSLOduration=100.553609796 podStartE2EDuration="1m40.553609796s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.553418091 +0000 UTC m=+160.943903901" watchObservedRunningTime="2026-02-23 18:36:05.553609796 +0000 UTC m=+160.944095616" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.567639 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hqtsz" podStartSLOduration=100.567619801 podStartE2EDuration="1m40.567619801s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.56721502 +0000 UTC m=+160.957700880" watchObservedRunningTime="2026-02-23 18:36:05.567619801 +0000 UTC m=+160.958105611" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.587089 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=85.587070216 podStartE2EDuration="1m25.587070216s" podCreationTimestamp="2026-02-23 18:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.587047025 +0000 UTC m=+160.977532895" watchObservedRunningTime="2026-02-23 18:36:05.587070216 +0000 UTC m=+160.977556026" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.603617 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=40.603595 podStartE2EDuration="40.603595s" podCreationTimestamp="2026-02-23 18:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.602446889 +0000 UTC m=+160.992932699" watchObservedRunningTime="2026-02-23 18:36:05.603595 +0000 UTC m=+160.994080810" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.675645 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=83.675619391 podStartE2EDuration="1m23.675619391s" podCreationTimestamp="2026-02-23 18:34:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.674808808 +0000 UTC m=+161.065294628" watchObservedRunningTime="2026-02-23 18:36:05.675619391 +0000 UTC m=+161.066105231" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.710551 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2d9sk" podStartSLOduration=100.71052834 podStartE2EDuration="1m40.71052834s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.69089957 +0000 UTC m=+161.081385380" watchObservedRunningTime="2026-02-23 18:36:05.71052834 +0000 UTC m=+161.101014160" Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.710670 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bvntk" podStartSLOduration=100.710662624 podStartE2EDuration="1m40.710662624s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:05.709914643 +0000 UTC m=+161.100400463" watchObservedRunningTime="2026-02-23 18:36:05.710662624 +0000 UTC m=+161.101148444" Feb 23 18:36:05 crc kubenswrapper[4768]: E0223 18:36:05.893373 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 23 18:36:05 crc kubenswrapper[4768]: I0223 18:36:05.920379 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:33:20.371689591 +0000 UTC Feb 23 18:36:06 crc kubenswrapper[4768]: I0223 18:36:06.307307 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:06 crc kubenswrapper[4768]: I0223 18:36:06.307353 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:06 crc kubenswrapper[4768]: E0223 18:36:06.307526 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:06 crc kubenswrapper[4768]: E0223 18:36:06.307711 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:06 crc kubenswrapper[4768]: I0223 18:36:06.921536 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:04:28.438709554 +0000 UTC Feb 23 18:36:07 crc kubenswrapper[4768]: I0223 18:36:07.307329 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:07 crc kubenswrapper[4768]: I0223 18:36:07.307762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:07 crc kubenswrapper[4768]: E0223 18:36:07.308157 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:07 crc kubenswrapper[4768]: E0223 18:36:07.308342 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:07 crc kubenswrapper[4768]: I0223 18:36:07.308734 4768 scope.go:117] "RemoveContainer" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" Feb 23 18:36:07 crc kubenswrapper[4768]: I0223 18:36:07.921945 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 17:52:51.034162844 +0000 UTC Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.013934 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.013978 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.013990 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.014010 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.014020 4768 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-23T18:36:08Z","lastTransitionTime":"2026-02-23T18:36:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.055005 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c"] Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.055476 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.057204 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.057610 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.057647 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.060426 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.164135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.164216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6424693f-50e5-4c6d-887a-9b159956590e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: 
\"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.164239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.164283 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6424693f-50e5-4c6d-887a-9b159956590e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.164298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6424693f-50e5-4c6d-887a-9b159956590e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.251999 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/2.log" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.255756 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" 
event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerStarted","Data":"24e883ed5968ad2aea0d730fc1c6b926281a7fe9bcc2898e80b8cbb9b2cb5f09"} Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.256488 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.264831 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6424693f-50e5-4c6d-887a-9b159956590e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.264873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.264897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6424693f-50e5-4c6d-887a-9b159956590e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.264916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6424693f-50e5-4c6d-887a-9b159956590e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.264941 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.265015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.265239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6424693f-50e5-4c6d-887a-9b159956590e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.266116 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6424693f-50e5-4c6d-887a-9b159956590e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.275577 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6424693f-50e5-4c6d-887a-9b159956590e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.284560 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9s8hm"] Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.284712 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:08 crc kubenswrapper[4768]: E0223 18:36:08.284885 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.303637 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6424693f-50e5-4c6d-887a-9b159956590e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pql7c\" (UID: \"6424693f-50e5-4c6d-887a-9b159956590e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.308963 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:08 crc kubenswrapper[4768]: E0223 18:36:08.309194 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.309367 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:08 crc kubenswrapper[4768]: E0223 18:36:08.309724 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.312358 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podStartSLOduration=103.312344605 podStartE2EDuration="1m43.312344605s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:08.310797192 +0000 UTC m=+163.701283012" watchObservedRunningTime="2026-02-23 18:36:08.312344605 +0000 UTC m=+163.702830415" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.366827 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.923345 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:54:55.580541481 +0000 UTC Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.924794 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 23 18:36:08 crc kubenswrapper[4768]: I0223 18:36:08.933411 4768 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 23 18:36:09 crc kubenswrapper[4768]: I0223 18:36:09.264214 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" event={"ID":"6424693f-50e5-4c6d-887a-9b159956590e","Type":"ContainerStarted","Data":"71943f5cc5fb5e5d4c84b4aa910b5cb5abc8663036f20c4efbe5855b3d2a560d"} Feb 23 18:36:09 crc kubenswrapper[4768]: I0223 18:36:09.264349 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" event={"ID":"6424693f-50e5-4c6d-887a-9b159956590e","Type":"ContainerStarted","Data":"3ce7ef0aa244f7fd6364d909e0b82a449ed450fd00047d13060d2065f4dab415"} Feb 23 18:36:09 crc kubenswrapper[4768]: I0223 18:36:09.284965 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pql7c" podStartSLOduration=104.284938855 podStartE2EDuration="1m44.284938855s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:09.284887193 +0000 UTC m=+164.675372993" watchObservedRunningTime="2026-02-23 18:36:09.284938855 +0000 UTC m=+164.675424695" Feb 23 18:36:09 crc 
kubenswrapper[4768]: I0223 18:36:09.307213 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:09 crc kubenswrapper[4768]: E0223 18:36:09.307692 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:10 crc kubenswrapper[4768]: I0223 18:36:10.307455 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:10 crc kubenswrapper[4768]: I0223 18:36:10.307524 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:10 crc kubenswrapper[4768]: I0223 18:36:10.307600 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:10 crc kubenswrapper[4768]: E0223 18:36:10.307673 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:10 crc kubenswrapper[4768]: E0223 18:36:10.307882 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:10 crc kubenswrapper[4768]: E0223 18:36:10.307994 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:10 crc kubenswrapper[4768]: E0223 18:36:10.894771 4768 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 23 18:36:11 crc kubenswrapper[4768]: I0223 18:36:11.307578 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:11 crc kubenswrapper[4768]: E0223 18:36:11.307739 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:12 crc kubenswrapper[4768]: I0223 18:36:12.307460 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:12 crc kubenswrapper[4768]: I0223 18:36:12.307580 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:12 crc kubenswrapper[4768]: E0223 18:36:12.307671 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:12 crc kubenswrapper[4768]: E0223 18:36:12.307780 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:12 crc kubenswrapper[4768]: I0223 18:36:12.308454 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:12 crc kubenswrapper[4768]: E0223 18:36:12.308703 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:13 crc kubenswrapper[4768]: I0223 18:36:13.306977 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:13 crc kubenswrapper[4768]: E0223 18:36:13.307194 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:14 crc kubenswrapper[4768]: I0223 18:36:14.307292 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:14 crc kubenswrapper[4768]: I0223 18:36:14.307340 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:14 crc kubenswrapper[4768]: I0223 18:36:14.307448 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:14 crc kubenswrapper[4768]: E0223 18:36:14.309787 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 23 18:36:14 crc kubenswrapper[4768]: E0223 18:36:14.309872 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9s8hm" podUID="1bcfbee2-d95a-4f58-b436-5233d3691ee8" Feb 23 18:36:14 crc kubenswrapper[4768]: E0223 18:36:14.309924 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 23 18:36:15 crc kubenswrapper[4768]: I0223 18:36:15.306749 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:15 crc kubenswrapper[4768]: E0223 18:36:15.308277 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.306925 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.306935 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.307029 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.311775 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.311771 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.311815 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.312794 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.313012 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 18:36:16 crc kubenswrapper[4768]: I0223 18:36:16.313007 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 18:36:17 crc kubenswrapper[4768]: I0223 18:36:17.307539 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.426120 4768 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.484434 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.485213 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.485575 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.485770 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.485661 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.486578 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4hmsw"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.486806 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.486957 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.487515 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.488519 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.494878 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.495520 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.495827 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.497961 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.498882 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.499193 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.499303 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mz6w6"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.500034 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mz6w6"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.500515 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.503489 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.503665 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.503742 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.503879 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.503975 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.504062 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.504164 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.505368 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.505441 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.506100 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.506758 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.506953 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.508055 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.508598 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.508955 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.508597 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.509941 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.510431 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.513086 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.513940 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.515836 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.522073 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s56mb"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.522508 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.522877 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-v9856"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.523348 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.523762 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.523950 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.524476 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.543828 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.544217 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.548649 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.549780 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.549976 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.550230 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.550888 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.562579 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.563167 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.568735 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.568909 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.568967 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.569087 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.570006 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.570107 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.570424 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggqts"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.571002 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.571136 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.571686 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572092 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572186 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572270 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572499 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572674 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572751 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572836 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572868 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572899 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572940 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.572974 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573027 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573077 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573099 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573151 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573168 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573227 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573239 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573334 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573357 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573624 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.573961 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.574226 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.574274 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.574587 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.574807 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.578518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.578796 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.578945 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.579051 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.581454 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.582286 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.583670 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.584128 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.585904 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.587102 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.591240 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.591587 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.591643 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.591921 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592021 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592038 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592300 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.591973 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592451 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592554 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592643 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592703 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592657 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592832 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.592932 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223
18:36:18.593083 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.593278 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nnn8b"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.593904 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597121 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597331 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49xwf\" (UniqueName: \"kubernetes.io/projected/78cada77-daaa-4a63-acf4-12499986ea25-kube-api-access-49xwf\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597378 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m244c\" (UniqueName: \"kubernetes.io/projected/d1dce2ce-c431-4bc4-9f67-043f68609576-kube-api-access-m244c\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh69\" (UniqueName: \"kubernetes.io/projected/be973663-1808-4081-8531-a5a03e55eafb-kube-api-access-cbh69\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597444 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-auth-proxy-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-config\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597493 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597509 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597524 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-encryption-config\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tfb\" (UniqueName: \"kubernetes.io/projected/cd8a7de4-dae6-4dfa-afbf-656370147b87-kube-api-access-k8tfb\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597576 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-audit-policies\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597590 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597612 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be973663-1808-4081-8531-a5a03e55eafb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597629 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78cada77-daaa-4a63-acf4-12499986ea25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597645 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24444\" (UniqueName: \"kubernetes.io/projected/3af14597-4b62-431a-939a-2e7c3592a896-kube-api-access-24444\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597705 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName:
\"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597726 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597740 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqcsv\" (UniqueName: \"kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597753 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-trusted-ca\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cada77-daaa-4a63-acf4-12499986ea25-serving-cert\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597785 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cd8a7de4-dae6-4dfa-afbf-656370147b87-machine-approver-tls\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597799 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597839 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1dce2ce-c431-4bc4-9f67-043f68609576-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597871 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597887 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6107cc-bc15-45b5-807a-a41c4ecefca6-serving-cert\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3af14597-4b62-431a-939a-2e7c3592a896-audit-dir\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.597964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.601209 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.599451 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-699s5\" (UniqueName: \"kubernetes.io/projected/2d6107cc-bc15-45b5-807a-a41c4ecefca6-kube-api-access-699s5\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.604174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1dce2ce-c431-4bc4-9f67-043f68609576-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.604213 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-etcd-client\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.604241 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.604280 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-serving-cert\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.604298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gxjf\" (UniqueName: \"kubernetes.io/projected/26f1fca3-79fa-4717-8b2b-dbdad99057cc-kube-api-access-4gxjf\") pod \"downloads-7954f5f757-mz6w6\" (UID: \"26f1fca3-79fa-4717-8b2b-dbdad99057cc\") " pod="openshift-console/downloads-7954f5f757-mz6w6"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.605583 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.616926 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.617556 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.620387 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.622386 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.622887 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.622741 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.623114 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.622794 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.622838 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.623603 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.623953 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.624723 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.631936 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.632230 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.632434 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.632616 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.632667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-ingress"/"service-ca-bundle" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.632846 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.633176 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.634770 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.635911 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.636575 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.642744 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.643134 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.643445 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.649632 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.650064 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r7fm5"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.650857 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.656682 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ph4l"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.657530 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.657887 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tp2pv"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.660001 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.660200 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.661665 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.662480 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.662751 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.663701 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x58kc"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.664671 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.665325 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.666839 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.667334 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.668094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.668524 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.668575 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.669348 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.670061 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.670531 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.670706 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.671497 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.672753 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.673175 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.674910 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.675567 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.676213 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.676751 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.677586 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.678196 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.679338 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.684690 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.686397 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.688156 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.688290 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"] Feb 23 18:36:18 crc 
kubenswrapper[4768]: I0223 18:36:18.689479 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.690055 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.691149 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.691818 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.693171 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4hmsw"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.693764 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.694666 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.695685 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s56mb"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.696668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r7fm5"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.697971 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 
18:36:18.698975 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.700035 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.700918 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ph4l"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.702331 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qv5qk"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.704011 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"] Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.704187 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.705945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-etcd-client\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.707372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mxvx\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-kube-api-access-8mxvx\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.707580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-images\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.707709 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1dce2ce-c431-4bc4-9f67-043f68609576-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708584 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708775 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-metrics-tls\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708916 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knlw2\" (UniqueName: \"kubernetes.io/projected/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-kube-api-access-knlw2\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.708974 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r57vg\" (UniqueName: \"kubernetes.io/projected/4efae636-8579-4696-b7a7-91c925fdca48-kube-api-access-r57vg\") pod \"migrator-59844c95c7-2q4n2\" (UID: \"4efae636-8579-4696-b7a7-91c925fdca48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709004 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gxjf\" (UniqueName: \"kubernetes.io/projected/26f1fca3-79fa-4717-8b2b-dbdad99057cc-kube-api-access-4gxjf\") pod \"downloads-7954f5f757-mz6w6\" (UID: \"26f1fca3-79fa-4717-8b2b-dbdad99057cc\") " pod="openshift-console/downloads-7954f5f757-mz6w6" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709027 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-serving-cert\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709092 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-config\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709118 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49xwf\" (UniqueName: \"kubernetes.io/projected/78cada77-daaa-4a63-acf4-12499986ea25-kube-api-access-49xwf\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709138 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709159 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh69\" (UniqueName: \"kubernetes.io/projected/be973663-1808-4081-8531-a5a03e55eafb-kube-api-access-cbh69\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709180 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m244c\" (UniqueName: \"kubernetes.io/projected/d1dce2ce-c431-4bc4-9f67-043f68609576-kube-api-access-m244c\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-serving-cert\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.709293 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:18 crc 
kubenswrapper[4768]: I0223 18:36:18.709319 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-auth-proxy-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.710947 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-config\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656ceec2-cb68-438c-a58f-5e64f3296cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711050 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config\") 
pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711127 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmvjx\" (UniqueName: \"kubernetes.io/projected/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-kube-api-access-mmvjx\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.711263 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.710874 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/cd8a7de4-dae6-4dfa-afbf-656370147b87-auth-proxy-config\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712219 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-config\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-encryption-config\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-service-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.712583 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6445r\" (UniqueName: \"kubernetes.io/projected/656ceec2-cb68-438c-a58f-5e64f3296cc6-kube-api-access-6445r\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713187 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713222 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tfb\" (UniqueName: \"kubernetes.io/projected/cd8a7de4-dae6-4dfa-afbf-656370147b87-kube-api-access-k8tfb\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713339 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713385 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-audit-policies\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd763cbb-7bd3-4384-b30e-ee24938ed653-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713803 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcdc7\" (UniqueName: \"kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713872 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.713897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-client\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.714136 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1dce2ce-c431-4bc4-9f67-043f68609576-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.714261 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.714450 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3af14597-4b62-431a-939a-2e7c3592a896-audit-policies\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.714774 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.715190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be973663-1808-4081-8531-a5a03e55eafb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.715281 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.715342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdee7aa6-3507-4d5c-8039-646b66ece997-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-etcd-client\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716425 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78cada77-daaa-4a63-acf4-12499986ea25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716502 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d4k9\" (UniqueName: \"kubernetes.io/projected/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-kube-api-access-5d4k9\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdee7aa6-3507-4d5c-8039-646b66ece997-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716903 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/656ceec2-cb68-438c-a58f-5e64f3296cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.716956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.717023 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24444\" (UniqueName: \"kubernetes.io/projected/3af14597-4b62-431a-939a-2e7c3592a896-kube-api-access-24444\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.717525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.717713 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78cada77-daaa-4a63-acf4-12499986ea25-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.717119 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd763cbb-7bd3-4384-b30e-ee24938ed653-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.718220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.718415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.718471 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.718615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.718924 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqcsv\" (UniqueName: \"kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719013 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-trusted-ca\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719066 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-config\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719131 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719187 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719393 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cada77-daaa-4a63-acf4-12499986ea25-serving-cert\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cd8a7de4-dae6-4dfa-afbf-656370147b87-machine-approver-tls\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.719466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.720076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-serving-cert\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.720538 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.720722 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72nlp\" (UniqueName: \"kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.720782 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.721003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.721497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1dce2ce-c431-4bc4-9f67-043f68609576-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.721688 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.721961 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.722143 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.722017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2d6107cc-bc15-45b5-807a-a41c4ecefca6-trusted-ca\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.722296 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td5kj\" (UniqueName: \"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-kube-api-access-td5kj\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.722758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3af14597-4b62-431a-939a-2e7c3592a896-audit-dir\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.722910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6107cc-bc15-45b5-807a-a41c4ecefca6-serving-cert\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.723053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-699s5\" (UniqueName: \"kubernetes.io/projected/2d6107cc-bc15-45b5-807a-a41c4ecefca6-kube-api-access-699s5\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.724218 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3af14597-4b62-431a-939a-2e7c3592a896-encryption-config\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.724278 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cd8a7de4-dae6-4dfa-afbf-656370147b87-machine-approver-tls\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.724869 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3af14597-4b62-431a-939a-2e7c3592a896-audit-dir\") pod \"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.725325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggqts"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.725630 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.725661 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.725829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.725944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.726076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.726799 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/be973663-1808-4081-8531-a5a03e55eafb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.728695 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.728746 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.728696 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.730225 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6107cc-bc15-45b5-807a-a41c4ecefca6-serving-cert\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.731092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cada77-daaa-4a63-acf4-12499986ea25-serving-cert\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.731351 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.732390 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1dce2ce-c431-4bc4-9f67-043f68609576-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.733707 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.735383 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.739511 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mz6w6"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.741890 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.746038 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.747933 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.748110 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v9856"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.750545 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.753788 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.756190 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.761137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.762971 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x58kc"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.764165 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.766112 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tp2pv"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.768068 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.777807 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.778848 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.780400 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.782439 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qv5qk"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.783469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.784455 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tllsr"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.785767 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-94drr"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.785993 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.786332 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-94drr"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.786581 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tllsr"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.787887 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.788070 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-94drr"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.789322 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n2bkb"]
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.789910 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n2bkb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.808025 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.825558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.825691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-config\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.825808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.825926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-serving-cert\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656ceec2-cb68-438c-a58f-5e64f3296cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826240 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmvjx\" (UniqueName: \"kubernetes.io/projected/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-kube-api-access-mmvjx\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826391 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856"
Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-service-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb"
Feb 23 18:36:18 crc
kubenswrapper[4768]: I0223 18:36:18.826598 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6445r\" (UniqueName: \"kubernetes.io/projected/656ceec2-cb68-438c-a58f-5e64f3296cc6-kube-api-access-6445r\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd763cbb-7bd3-4384-b30e-ee24938ed653-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826947 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827052 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcdc7\" (UniqueName: 
\"kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827152 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-client\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827305 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827413 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdee7aa6-3507-4d5c-8039-646b66ece997-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827510 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d4k9\" (UniqueName: \"kubernetes.io/projected/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-kube-api-access-5d4k9\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827642 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdee7aa6-3507-4d5c-8039-646b66ece997-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827840 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/656ceec2-cb68-438c-a58f-5e64f3296cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.827950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd763cbb-7bd3-4384-b30e-ee24938ed653-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " 
pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828175 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-config\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828338 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828476 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72nlp\" (UniqueName: \"kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td5kj\" (UniqueName: 
\"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-kube-api-access-td5kj\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mxvx\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-kube-api-access-8mxvx\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.828955 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-images\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829057 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-metrics-tls\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829154 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829269 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-config\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829278 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knlw2\" (UniqueName: \"kubernetes.io/projected/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-kube-api-access-knlw2\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.829490 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r57vg\" (UniqueName: \"kubernetes.io/projected/4efae636-8579-4696-b7a7-91c925fdca48-kube-api-access-r57vg\") pod \"migrator-59844c95c7-2q4n2\" (UID: \"4efae636-8579-4696-b7a7-91c925fdca48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.830446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.831667 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-images\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.832923 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/656ceec2-cb68-438c-a58f-5e64f3296cc6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.833811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.835091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.835433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-metrics-tls\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 
18:36:18.837108 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.837144 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-client\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.826416 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.837675 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656ceec2-cb68-438c-a58f-5e64f3296cc6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.837988 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc 
kubenswrapper[4768]: I0223 18:36:18.838028 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.838350 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-serving-cert\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.838859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fdee7aa6-3507-4d5c-8039-646b66ece997-trusted-ca\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.839081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.839345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fdee7aa6-3507-4d5c-8039-646b66ece997-metrics-tls\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 
18:36:18.842158 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.843106 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd763cbb-7bd3-4384-b30e-ee24938ed653-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.843399 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.845784 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.846747 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-config\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.846763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.847300 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cd763cbb-7bd3-4384-b30e-ee24938ed653-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.847958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-etcd-service-ca\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.848188 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.908007 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.928213 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.948390 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.968836 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 18:36:18 crc kubenswrapper[4768]: I0223 18:36:18.995768 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.008948 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.028144 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.047939 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.068735 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.088639 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.109465 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.128806 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.160711 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.169104 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 
18:36:19.189874 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.208442 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.229300 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.248896 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.269666 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.289880 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.309564 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.329128 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.348784 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.368820 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.389135 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" 
Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.409131 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.429121 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.449220 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.468968 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.488555 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.508618 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.528443 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.548939 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.568558 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.602660 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.608694 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 18:36:19 crc 
kubenswrapper[4768]: I0223 18:36:19.629548 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.649825 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.667195 4768 request.go:700] Waited for 1.002178094s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0 Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.669817 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.689420 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.709756 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.728915 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.749160 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.769495 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.789166 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.809162 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.828700 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.848425 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.869323 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.889684 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.908805 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.928983 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.949425 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.969105 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 18:36:19 crc kubenswrapper[4768]: I0223 18:36:19.989830 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.008508 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.029352 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.049859 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.069076 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.089674 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.109048 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.129953 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.149118 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.168698 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 
18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.188687 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.208514 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.229533 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.250092 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.269399 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.288957 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.309046 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.328630 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.348634 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.395807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gxjf\" (UniqueName: 
\"kubernetes.io/projected/26f1fca3-79fa-4717-8b2b-dbdad99057cc-kube-api-access-4gxjf\") pod \"downloads-7954f5f757-mz6w6\" (UID: \"26f1fca3-79fa-4717-8b2b-dbdad99057cc\") " pod="openshift-console/downloads-7954f5f757-mz6w6" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.414202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh69\" (UniqueName: \"kubernetes.io/projected/be973663-1808-4081-8531-a5a03e55eafb-kube-api-access-cbh69\") pod \"cluster-samples-operator-665b6dd947-25ssm\" (UID: \"be973663-1808-4081-8531-a5a03e55eafb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.435400 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.441195 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tfb\" (UniqueName: \"kubernetes.io/projected/cd8a7de4-dae6-4dfa-afbf-656370147b87-kube-api-access-k8tfb\") pod \"machine-approver-56656f9798-7ghtp\" (UID: \"cd8a7de4-dae6-4dfa-afbf-656370147b87\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.457568 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-mz6w6" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.459372 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m244c\" (UniqueName: \"kubernetes.io/projected/d1dce2ce-c431-4bc4-9f67-043f68609576-kube-api-access-m244c\") pod \"openshift-apiserver-operator-796bbdcf4f-2tb29\" (UID: \"d1dce2ce-c431-4bc4-9f67-043f68609576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.471332 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.472117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49xwf\" (UniqueName: \"kubernetes.io/projected/78cada77-daaa-4a63-acf4-12499986ea25-kube-api-access-49xwf\") pod \"openshift-config-operator-7777fb866f-dxkwc\" (UID: \"78cada77-daaa-4a63-acf4-12499986ea25\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:20 crc kubenswrapper[4768]: W0223 18:36:20.499299 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8a7de4_dae6_4dfa_afbf_656370147b87.slice/crio-febd325839f82ad3a844f5b5ff6ab01c94d4a19da5641b6ce97fb24991a07fc9 WatchSource:0}: Error finding container febd325839f82ad3a844f5b5ff6ab01c94d4a19da5641b6ce97fb24991a07fc9: Status 404 returned error can't find the container with id febd325839f82ad3a844f5b5ff6ab01c94d4a19da5641b6ce97fb24991a07fc9 Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.504341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24444\" (UniqueName: \"kubernetes.io/projected/3af14597-4b62-431a-939a-2e7c3592a896-kube-api-access-24444\") pod 
\"apiserver-7bbb656c7d-sqjgc\" (UID: \"3af14597-4b62-431a-939a-2e7c3592a896\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.515090 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqcsv\" (UniqueName: \"kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv\") pod \"oauth-openshift-558db77b4-lsn69\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.528874 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.534967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-699s5\" (UniqueName: \"kubernetes.io/projected/2d6107cc-bc15-45b5-807a-a41c4ecefca6-kube-api-access-699s5\") pod \"console-operator-58897d9998-4hmsw\" (UID: \"2d6107cc-bc15-45b5-807a-a41c4ecefca6\") " pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.549119 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.571925 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.588318 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.609611 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.614486 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.625892 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.627539 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.649238 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.669112 4768 request.go:700] Waited for 1.878976691s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.672172 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.674587 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.689355 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.704623 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mz6w6"] Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.705309 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.722435 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm"] Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.731005 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r57vg\" (UniqueName: \"kubernetes.io/projected/4efae636-8579-4696-b7a7-91c925fdca48-kube-api-access-r57vg\") pod \"migrator-59844c95c7-2q4n2\" (UID: \"4efae636-8579-4696-b7a7-91c925fdca48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.744001 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.747388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knlw2\" (UniqueName: \"kubernetes.io/projected/776d3448-3eb2-4f0f-b534-3a0b1df1ebe8-kube-api-access-knlw2\") pod \"etcd-operator-b45778765-s56mb\" (UID: \"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8\") " pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.765168 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d4k9\" (UniqueName: \"kubernetes.io/projected/4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de-kube-api-access-5d4k9\") pod \"machine-api-operator-5694c8668f-vn4nn\" (UID: \"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.784117 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.792472 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcdc7\" (UniqueName: \"kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7\") pod \"console-f9d7485db-v9856\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.802382 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-bound-sa-token\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.829811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.851337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72nlp\" (UniqueName: \"kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp\") pod \"route-controller-manager-6576b87f9c-nrd6m\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.859215 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.865608 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td5kj\" (UniqueName: \"kubernetes.io/projected/cd763cbb-7bd3-4384-b30e-ee24938ed653-kube-api-access-td5kj\") pod \"cluster-image-registry-operator-dc59b4c8b-phnrg\" (UID: \"cd763cbb-7bd3-4384-b30e-ee24938ed653\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.883644 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4hmsw"] Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.885692 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.899150 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mxvx\" (UniqueName: \"kubernetes.io/projected/fdee7aa6-3507-4d5c-8039-646b66ece997-kube-api-access-8mxvx\") pod \"ingress-operator-5b745b69d9-kzq46\" (UID: \"fdee7aa6-3507-4d5c-8039-646b66ece997\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.899316 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.905508 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc"] Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.910887 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmvjx\" (UniqueName: \"kubernetes.io/projected/2f9a97e9-9855-4a63-8d90-8ee30404ab5f-kube-api-access-mmvjx\") pod \"dns-operator-744455d44c-ggqts\" (UID: \"2f9a97e9-9855-4a63-8d90-8ee30404ab5f\") " pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.931589 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6445r\" (UniqueName: \"kubernetes.io/projected/656ceec2-cb68-438c-a58f-5e64f3296cc6-kube-api-access-6445r\") pod \"openshift-controller-manager-operator-756b6f6bc6-4hs2n\" (UID: \"656ceec2-cb68-438c-a58f-5e64f3296cc6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.951349 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc"] Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-service-ca-bundle\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973572 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2b4w\" (UniqueName: 
\"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-default-certificate\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973617 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-serving-cert\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973653 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 
23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcs8l\" (UniqueName: \"kubernetes.io/projected/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-kube-api-access-kcs8l\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973736 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-metrics-certs\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973756 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqbd\" (UniqueName: \"kubernetes.io/projected/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-kube-api-access-zmqbd\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973810 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973830 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973845 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973871 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-config\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:20 crc 
kubenswrapper[4768]: I0223 18:36:20.973888 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973904 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-stats-auth\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.973923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:20 crc kubenswrapper[4768]: E0223 18:36:20.974292 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.474279985 +0000 UTC m=+176.864765775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:20 crc kubenswrapper[4768]: W0223 18:36:20.990612 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78cada77_daaa_4a63_acf4_12499986ea25.slice/crio-f79e926e1f56a1d86a7c64ded10942d28b827aac5f7e9ded1e41905e5c26ca04 WatchSource:0}: Error finding container f79e926e1f56a1d86a7c64ded10942d28b827aac5f7e9ded1e41905e5c26ca04: Status 404 returned error can't find the container with id f79e926e1f56a1d86a7c64ded10942d28b827aac5f7e9ded1e41905e5c26ca04 Feb 23 18:36:20 crc kubenswrapper[4768]: I0223 18:36:20.995321 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-s56mb"] Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.011235 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod776d3448_3eb2_4f0f_b534_3a0b1df1ebe8.slice/crio-51f2dae56c1d52b7eabd6ff540c623cff1a19b2f79073e060bc0b7c435ac0533 WatchSource:0}: Error finding container 51f2dae56c1d52b7eabd6ff540c623cff1a19b2f79073e060bc0b7c435ac0533: Status 404 returned error can't find the container with id 51f2dae56c1d52b7eabd6ff540c623cff1a19b2f79073e060bc0b7c435ac0533 Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.075501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.078046 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.088597 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.091665 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.584392172 +0000 UTC m=+176.974878052 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.091766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-profile-collector-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.091837 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.091901 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-stats-auth\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.091953 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed12b20a-27bb-4560-a46e-68302e06f373-config\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.091985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-serving-cert\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092099 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpnhv\" (UniqueName: \"kubernetes.io/projected/4ddb8168-a234-4a98-9feb-3301169affe9-kube-api-access-lpnhv\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-certs\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092205 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-serving-cert\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092229 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmzv5\" (UniqueName: \"kubernetes.io/projected/30b720f3-fda0-41f1-bca9-e52fe84a3535-kube-api-access-bmzv5\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092277 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-csi-data-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092363 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff00ff81-a3dd-477f-98c7-a99d0d462f57-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092404 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-image-import-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-node-bootstrap-token\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " 
pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-registration-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-client\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092524 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-encryption-config\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092551 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794d73ce-3e95-4492-a64d-4ef84a11d014-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1496fda2-941d-48a9-8bdd-05ee6f0d235a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpp2x\" (UniqueName: \"kubernetes.io/projected/1496fda2-941d-48a9-8bdd-05ee6f0d235a-kube-api-access-qpp2x\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092677 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-metrics-certs\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092784 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-webhook-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092848 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/30b720f3-fda0-41f1-bca9-e52fe84a3535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092883 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-serving-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-srv-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.092988 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-apiservice-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093021 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7xcq\" (UniqueName: \"kubernetes.io/projected/84338b9c-fbd7-4987-95d7-21a4d09e2b05-kube-api-access-k7xcq\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093050 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62bc1d94-f1c8-4c29-ab4f-becc5775876a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmqbd\" (UniqueName: \"kubernetes.io/projected/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-kube-api-access-zmqbd\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093100 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcmg\" (UniqueName: 
\"kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093162 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62bc1d94-f1c8-4c29-ab4f-becc5775876a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093205 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.093227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.094877 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 
18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.096662 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.100553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63b7fc38-b496-4673-9912-0b7c1018962b-proxy-tls\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.100714 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrxvc\" (UniqueName: \"kubernetes.io/projected/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-kube-api-access-lrxvc\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-audit\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.101473 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.601429381 +0000 UTC m=+176.991915171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddb8168-a234-4a98-9feb-3301169affe9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101598 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-srv-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff00ff81-a3dd-477f-98c7-a99d0d462f57-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101791 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed12b20a-27bb-4560-a46e-68302e06f373-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101814 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcrj\" (UniqueName: \"kubernetes.io/projected/4bc8243b-25c6-4117-b4eb-470f5ba127e3-kube-api-access-xpcrj\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101874 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101910 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-key\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: 
I0223 18:36:21.101957 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-images\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc8243b-25c6-4117-b4eb-470f5ba127e3-config-volume\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.101994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1609f0af-9131-40b0-8723-ed972292effb-config\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.103303 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgbt\" (UniqueName: \"kubernetes.io/projected/ff00ff81-a3dd-477f-98c7-a99d0d462f57-kube-api-access-8zgbt\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.103344 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-config\") pod 
\"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.103431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-config\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.103498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfft\" (UniqueName: \"kubernetes.io/projected/dc337bf7-2539-47f2-a100-a0e47b747abc-kube-api-access-cjfft\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.103749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1496fda2-941d-48a9-8bdd-05ee6f0d235a-proxy-tls\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104183 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-config\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104180 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104229 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-node-pullsecrets\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qpd8\" (UniqueName: \"kubernetes.io/projected/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-kube-api-access-6qpd8\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-socket-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " 
pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104871 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-audit-dir\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104959 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-service-ca-bundle\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.104992 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105051 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/794d73ce-3e95-4492-a64d-4ef84a11d014-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/794d73ce-3e95-4492-a64d-4ef84a11d014-config\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105155 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed12b20a-27bb-4560-a46e-68302e06f373-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105306 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.105994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106068 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r8tm\" (UniqueName: \"kubernetes.io/projected/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-kube-api-access-6r8tm\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2b4w\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106240 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-plugins-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106300 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tprc\" (UniqueName: \"kubernetes.io/projected/63b7fc38-b496-4673-9912-0b7c1018962b-kube-api-access-2tprc\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9tq\" (UniqueName: \"kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-default-certificate\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106611 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-service-ca-bundle\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106853 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106955 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.106978 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107024 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kcjj\" (UniqueName: \"kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107045 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1609f0af-9131-40b0-8723-ed972292effb-serving-cert\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mj59\" (UniqueName: \"kubernetes.io/projected/1609f0af-9131-40b0-8723-ed972292effb-kube-api-access-5mj59\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107496 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qx7x\" (UniqueName: \"kubernetes.io/projected/8698eec3-b444-4838-bfff-36fb054e8578-kube-api-access-4qx7x\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107518 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xb6p\" (UniqueName: \"kubernetes.io/projected/a491b22c-d857-469a-830d-791d53b4ccad-kube-api-access-7xb6p\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcs8l\" (UniqueName: \"kubernetes.io/projected/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-kube-api-access-kcs8l\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107650 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107673 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107737 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkl5x\" (UniqueName: \"kubernetes.io/projected/15b52cf1-2c86-4747-9be1-a690a5a125ca-kube-api-access-wkl5x\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107757 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8698eec3-b444-4838-bfff-36fb054e8578-tmpfs\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.107778 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j26j\" (UniqueName: \"kubernetes.io/projected/056782af-3e2e-4c24-a9c6-28c7acf1834b-kube-api-access-6j26j\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.108441 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.109267 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-stats-auth\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.109838 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-service-ca-bundle\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.110154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.110490 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62bc1d94-f1c8-4c29-ab4f-becc5775876a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.110542 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-cabundle\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.110574 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bc8243b-25c6-4117-b4eb-470f5ba127e3-metrics-tls\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.110913 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-cert\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.111168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.111610 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-default-certificate\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.111782 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.111850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-mountpoint-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.112178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.112539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.119162 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-metrics-certs\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.120788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-serving-cert\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.133565 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31ebc831_fd3d_4dfa_8b67_a0fa553b3472.slice/crio-a84a0edd9752ac55ab597f6cc88af5ff05fa88c99f8fb4577953434be7732e3d WatchSource:0}: Error finding container a84a0edd9752ac55ab597f6cc88af5ff05fa88c99f8fb4577953434be7732e3d: Status 404 returned error can't find the container with id a84a0edd9752ac55ab597f6cc88af5ff05fa88c99f8fb4577953434be7732e3d
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.140041 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v9856"]
Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.144382 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1dce2ce_c431_4bc4_9f67_043f68609576.slice/crio-2e95e011c169f2e0107b96faed3ebeb7c3f4dbcf60b78b23069ef41e3f069870 WatchSource:0}: Error finding container 2e95e011c169f2e0107b96faed3ebeb7c3f4dbcf60b78b23069ef41e3f069870: Status 404 returned error can't find the container with id 2e95e011c169f2e0107b96faed3ebeb7c3f4dbcf60b78b23069ef41e3f069870
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.150321 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmqbd\" (UniqueName: \"kubernetes.io/projected/587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8-kube-api-access-zmqbd\") pod \"authentication-operator-69f744f599-zh7hl\" (UID: \"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.153202 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.165032 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.176085 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.178921 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts"
Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.180806 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc269d15e_90d0_47d8_b2bd_f5785fa1a69b.slice/crio-be51b2fcf4f2dccb28b14e4b0aa8e73f50539866b1cc0a76753f64d0fb41c086 WatchSource:0}: Error finding container be51b2fcf4f2dccb28b14e4b0aa8e73f50539866b1cc0a76753f64d0fb41c086: Status 404 returned error can't find the container with id be51b2fcf4f2dccb28b14e4b0aa8e73f50539866b1cc0a76753f64d0fb41c086
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.189154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcs8l\" (UniqueName: \"kubernetes.io/projected/51c3071b-8dc3-402a-8f3e-a89fa71f4a54-kube-api-access-kcs8l\") pod \"router-default-5444994796-nnn8b\" (UID: \"51c3071b-8dc3-402a-8f3e-a89fa71f4a54\") " pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.192460 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nnn8b"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.198911 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vn4nn"]
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.208436 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2b4w\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.209098 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214272 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214455 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-csi-data-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214486 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214504 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff00ff81-a3dd-477f-98c7-a99d0d462f57-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214524 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-image-import-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-node-bootstrap-token\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-registration-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214571 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-encryption-config\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214591 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-client\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794d73ce-3e95-4492-a64d-4ef84a11d014-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1496fda2-941d-48a9-8bdd-05ee6f0d235a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214633 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-csi-data-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpp2x\" (UniqueName: \"kubernetes.io/projected/1496fda2-941d-48a9-8bdd-05ee6f0d235a-kube-api-access-qpp2x\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214713 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-webhook-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214788 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/30b720f3-fda0-41f1-bca9-e52fe84a3535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-serving-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214827 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-apiservice-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-srv-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214862 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62bc1d94-f1c8-4c29-ab4f-becc5775876a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7xcq\" (UniqueName: \"kubernetes.io/projected/84338b9c-fbd7-4987-95d7-21a4d09e2b05-kube-api-access-k7xcq\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214898 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bcmg\" (UniqueName: \"kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214928 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62bc1d94-f1c8-4c29-ab4f-becc5775876a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214955 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63b7fc38-b496-4673-9912-0b7c1018962b-proxy-tls\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrxvc\" (UniqueName: \"kubernetes.io/projected/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-kube-api-access-lrxvc\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.214993 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddb8168-a234-4a98-9feb-3301169affe9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-audit\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215031 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff00ff81-a3dd-477f-98c7-a99d0d462f57-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-srv-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed12b20a-27bb-4560-a46e-68302e06f373-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcrj\" (UniqueName: \"kubernetes.io/projected/4bc8243b-25c6-4117-b4eb-470f5ba127e3-kube-api-access-xpcrj\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215099 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-key\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-images\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc8243b-25c6-4117-b4eb-470f5ba127e3-config-volume\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215148 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1609f0af-9131-40b0-8723-ed972292effb-config\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zgbt\" (UniqueName: \"kubernetes.io/projected/ff00ff81-a3dd-477f-98c7-a99d0d462f57-kube-api-access-8zgbt\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215181 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-config\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfft\" (UniqueName: \"kubernetes.io/projected/dc337bf7-2539-47f2-a100-a0e47b747abc-kube-api-access-cjfft\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1496fda2-941d-48a9-8bdd-05ee6f0d235a-proxy-tls\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215235 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-node-pullsecrets\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qpd8\" (UniqueName: \"kubernetes.io/projected/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-kube-api-access-6qpd8\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215323 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-socket-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215338 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-audit-dir\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/794d73ce-3e95-4492-a64d-4ef84a11d014-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c"
Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d73ce-3e95-4492-a64d-4ef84a11d014-config\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed12b20a-27bb-4560-a46e-68302e06f373-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-plugins-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215452 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215470 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r8tm\" (UniqueName: \"kubernetes.io/projected/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-kube-api-access-6r8tm\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215489 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tprc\" (UniqueName: \"kubernetes.io/projected/63b7fc38-b496-4673-9912-0b7c1018962b-kube-api-access-2tprc\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215507 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9tq\" (UniqueName: \"kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc 
kubenswrapper[4768]: I0223 18:36:21.215545 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kcjj\" (UniqueName: \"kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215563 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1609f0af-9131-40b0-8723-ed972292effb-serving-cert\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mj59\" (UniqueName: \"kubernetes.io/projected/1609f0af-9131-40b0-8723-ed972292effb-kube-api-access-5mj59\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215616 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qx7x\" (UniqueName: \"kubernetes.io/projected/8698eec3-b444-4838-bfff-36fb054e8578-kube-api-access-4qx7x\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xb6p\" (UniqueName: \"kubernetes.io/projected/a491b22c-d857-469a-830d-791d53b4ccad-kube-api-access-7xb6p\") pod 
\"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215653 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215687 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkl5x\" (UniqueName: \"kubernetes.io/projected/15b52cf1-2c86-4747-9be1-a690a5a125ca-kube-api-access-wkl5x\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8698eec3-b444-4838-bfff-36fb054e8578-tmpfs\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215717 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j26j\" 
(UniqueName: \"kubernetes.io/projected/056782af-3e2e-4c24-a9c6-28c7acf1834b-kube-api-access-6j26j\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215734 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62bc1d94-f1c8-4c29-ab4f-becc5775876a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-cabundle\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215774 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bc8243b-25c6-4117-b4eb-470f5ba127e3-metrics-tls\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-cert\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215830 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-mountpoint-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215881 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-profile-collector-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215907 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed12b20a-27bb-4560-a46e-68302e06f373-config\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215936 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-serving-cert\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpnhv\" (UniqueName: \"kubernetes.io/projected/4ddb8168-a234-4a98-9feb-3301169affe9-kube-api-access-lpnhv\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-certs\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmzv5\" (UniqueName: \"kubernetes.io/projected/30b720f3-fda0-41f1-bca9-e52fe84a3535-kube-api-access-bmzv5\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.216769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.216845 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-socket-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.217271 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-audit-dir\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.215883 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-mountpoint-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.218093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-image-import-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.218545 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/794d73ce-3e95-4492-a64d-4ef84a11d014-config\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.218629 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.718594942 +0000 UTC m=+177.109080742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.219200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.219322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed12b20a-27bb-4560-a46e-68302e06f373-config\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.221863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/63b7fc38-b496-4673-9912-0b7c1018962b-images\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.221933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.222095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-registration-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.227813 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.223200 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/056782af-3e2e-4c24-a9c6-28c7acf1834b-plugins-dir\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.223724 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4bc8243b-25c6-4117-b4eb-470f5ba127e3-config-volume\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.223932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1609f0af-9131-40b0-8723-ed972292effb-config\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.224911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.227945 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-audit\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.225329 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-config\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.225437 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1496fda2-941d-48a9-8bdd-05ee6f0d235a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.225773 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-cabundle\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.226093 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-serving-ca\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.226211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dc337bf7-2539-47f2-a100-a0e47b747abc-node-pullsecrets\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.227341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/62bc1d94-f1c8-4c29-ab4f-becc5775876a-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.225101 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc337bf7-2539-47f2-a100-a0e47b747abc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.222325 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8698eec3-b444-4838-bfff-36fb054e8578-tmpfs\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.228650 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff00ff81-a3dd-477f-98c7-a99d0d462f57-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.228764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff00ff81-a3dd-477f-98c7-a99d0d462f57-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc 
kubenswrapper[4768]: I0223 18:36:21.229090 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-certs\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.230795 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/30b720f3-fda0-41f1-bca9-e52fe84a3535-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.230871 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1496fda2-941d-48a9-8bdd-05ee6f0d235a-proxy-tls\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.231094 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-profile-collector-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.232283 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-profile-collector-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: 
\"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/84338b9c-fbd7-4987-95d7-21a4d09e2b05-srv-cert\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234633 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62bc1d94-f1c8-4c29-ab4f-becc5775876a-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234748 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1609f0af-9131-40b0-8723-ed972292effb-serving-cert\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234851 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/15b52cf1-2c86-4747-9be1-a690a5a125ca-signing-key\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234915 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.234994 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bc8243b-25c6-4117-b4eb-470f5ba127e3-metrics-tls\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.235034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-webhook-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.235379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddb8168-a234-4a98-9feb-3301169affe9-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.235555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-etcd-client\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.235792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a491b22c-d857-469a-830d-791d53b4ccad-srv-cert\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.235870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-encryption-config\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.236082 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-node-bootstrap-token\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.239320 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8698eec3-b444-4838-bfff-36fb054e8578-apiservice-cert\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.240944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.244769 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/63b7fc38-b496-4673-9912-0b7c1018962b-proxy-tls\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.245819 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed12b20a-27bb-4560-a46e-68302e06f373-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.246388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc337bf7-2539-47f2-a100-a0e47b747abc-serving-cert\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.247742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/794d73ce-3e95-4492-a64d-4ef84a11d014-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 
18:36:21.247792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.248240 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-cert\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.253431 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2"] Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.266780 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cc7ad33_9f56_4f86_bf59_5cd21b4fc3de.slice/crio-e5d37ec0cd75be62db974f05ce94cefc1728757ecaa00f9809ac3ac5db3ffab4 WatchSource:0}: Error finding container e5d37ec0cd75be62db974f05ce94cefc1728757ecaa00f9809ac3ac5db3ffab4: Status 404 returned error can't find the container with id e5d37ec0cd75be62db974f05ce94cefc1728757ecaa00f9809ac3ac5db3ffab4 Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.267866 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpp2x\" (UniqueName: \"kubernetes.io/projected/1496fda2-941d-48a9-8bdd-05ee6f0d235a-kube-api-access-qpp2x\") pod \"machine-config-controller-84d6567774-6h7t4\" (UID: \"1496fda2-941d-48a9-8bdd-05ee6f0d235a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.287653 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmzv5\" (UniqueName: \"kubernetes.io/projected/30b720f3-fda0-41f1-bca9-e52fe84a3535-kube-api-access-bmzv5\") pod \"control-plane-machine-set-operator-78cbb6b69f-6qzzx\" (UID: \"30b720f3-fda0-41f1-bca9-e52fe84a3535\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.289613 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" Feb 23 18:36:21 crc kubenswrapper[4768]: W0223 18:36:21.301425 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4efae636_8579_4696_b7a7_91c925fdca48.slice/crio-69e6ea98063041056a3db962d061cc851c7361997d7aa25dbc729b81ec39c687 WatchSource:0}: Error finding container 69e6ea98063041056a3db962d061cc851c7361997d7aa25dbc729b81ec39c687: Status 404 returned error can't find the container with id 69e6ea98063041056a3db962d061cc851c7361997d7aa25dbc729b81ec39c687 Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.304412 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/794d73ce-3e95-4492-a64d-4ef84a11d014-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-b8c6c\" (UID: \"794d73ce-3e95-4492-a64d-4ef84a11d014\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.317793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.318348 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.818324825 +0000 UTC m=+177.208810625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.334755 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mz6w6" event={"ID":"26f1fca3-79fa-4717-8b2b-dbdad99057cc","Type":"ContainerStarted","Data":"e773fffe1d76de8787fcf3c397e0a2a4c0cfa2bb2a8f87d3681d51d92c961d73"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.334807 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mz6w6" event={"ID":"26f1fca3-79fa-4717-8b2b-dbdad99057cc","Type":"ContainerStarted","Data":"dc88e79aa102cdcd1dd7a10ddc3902e085f2b98605c7e0f22eabba0f007b23a2"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.335381 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mz6w6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.336038 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" 
event={"ID":"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8","Type":"ContainerStarted","Data":"51f2dae56c1d52b7eabd6ff540c623cff1a19b2f79073e060bc0b7c435ac0533"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.337981 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v9856" event={"ID":"c269d15e-90d0-47d8-b2bd-f5785fa1a69b","Type":"ContainerStarted","Data":"be51b2fcf4f2dccb28b14e4b0aa8e73f50539866b1cc0a76753f64d0fb41c086"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.339601 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mz6w6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.339644 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mz6w6" podUID="26f1fca3-79fa-4717-8b2b-dbdad99057cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.340048 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" event={"ID":"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de","Type":"ContainerStarted","Data":"e5d37ec0cd75be62db974f05ce94cefc1728757ecaa00f9809ac3ac5db3ffab4"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.346435 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qx7x\" (UniqueName: \"kubernetes.io/projected/8698eec3-b444-4838-bfff-36fb054e8578-kube-api-access-4qx7x\") pod \"packageserver-d55dfcdfc-pg8m4\" (UID: \"8698eec3-b444-4838-bfff-36fb054e8578\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.350638 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" event={"ID":"3af14597-4b62-431a-939a-2e7c3592a896","Type":"ContainerStarted","Data":"1eb97ff2c6b8620ed010f8f62da74e377897c50d9253184c58ab9f4b114d7f71"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.354962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" event={"ID":"31ebc831-fd3d-4dfa-8b67-a0fa553b3472","Type":"ContainerStarted","Data":"a84a0edd9752ac55ab597f6cc88af5ff05fa88c99f8fb4577953434be7732e3d"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.359288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xb6p\" (UniqueName: \"kubernetes.io/projected/a491b22c-d857-469a-830d-791d53b4ccad-kube-api-access-7xb6p\") pod \"olm-operator-6b444d44fb-57sqf\" (UID: \"a491b22c-d857-469a-830d-791d53b4ccad\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.366096 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" event={"ID":"78cada77-daaa-4a63-acf4-12499986ea25","Type":"ContainerStarted","Data":"fa12e407ff4caedccb5672809915e9afc0c0c4f25a903df88f80b5d47d859f9d"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.366141 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" event={"ID":"78cada77-daaa-4a63-acf4-12499986ea25","Type":"ContainerStarted","Data":"f79e926e1f56a1d86a7c64ded10942d28b827aac5f7e9ded1e41905e5c26ca04"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.369302 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.380331 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.386712 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ed12b20a-27bb-4560-a46e-68302e06f373-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-n5vz6\" (UID: \"ed12b20a-27bb-4560-a46e-68302e06f373\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.386762 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcrj\" (UniqueName: \"kubernetes.io/projected/4bc8243b-25c6-4117-b4eb-470f5ba127e3-kube-api-access-xpcrj\") pod \"dns-default-94drr\" (UID: \"4bc8243b-25c6-4117-b4eb-470f5ba127e3\") " pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.389850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" event={"ID":"cd8a7de4-dae6-4dfa-afbf-656370147b87","Type":"ContainerStarted","Data":"18b3c7da595e3856599531f673ab0847b9bc8b03f41da7b782fdb095ede780cb"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.389914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" event={"ID":"cd8a7de4-dae6-4dfa-afbf-656370147b87","Type":"ContainerStarted","Data":"50f05d250e4276b61baf90ffd9a9d51cf3f22356f0d9ac1cddeddf5875724c87"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.389928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" event={"ID":"cd8a7de4-dae6-4dfa-afbf-656370147b87","Type":"ContainerStarted","Data":"febd325839f82ad3a844f5b5ff6ab01c94d4a19da5641b6ce97fb24991a07fc9"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.393604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" event={"ID":"d1dce2ce-c431-4bc4-9f67-043f68609576","Type":"ContainerStarted","Data":"2e95e011c169f2e0107b96faed3ebeb7c3f4dbcf60b78b23069ef41e3f069870"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.398282 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" event={"ID":"2d6107cc-bc15-45b5-807a-a41c4ecefca6","Type":"ContainerStarted","Data":"84406816c5f77200a895ea2a7192b83143dfd5661d330b68f1484ed1c9f648bb"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.398315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" event={"ID":"2d6107cc-bc15-45b5-807a-a41c4ecefca6","Type":"ContainerStarted","Data":"c85f37cfcb863c4e8d2b69064a2eb3c1ea3df910125d1ed368d36512f517f281"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.398570 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.399791 4768 patch_prober.go:28] interesting pod/console-operator-58897d9998-4hmsw container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.399832 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" 
podUID="2d6107cc-bc15-45b5-807a-a41c4ecefca6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.403518 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" event={"ID":"be973663-1808-4081-8531-a5a03e55eafb","Type":"ContainerStarted","Data":"cd8a75c13fa65408f308a24152412fd8cc1a2311b91cd628ed38011535b75b7f"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.403557 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" event={"ID":"be973663-1808-4081-8531-a5a03e55eafb","Type":"ContainerStarted","Data":"57b1e402f0ca80e3b8156f0994bd0d2c86fc628d175f2c12d52fe670f9124a33"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.403593 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" event={"ID":"be973663-1808-4081-8531-a5a03e55eafb","Type":"ContainerStarted","Data":"50145700a7ac83aa83c58b44c6e032ff4cbf29a5caef8e3fd402310f543af1c2"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.406821 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" event={"ID":"4efae636-8579-4696-b7a7-91c925fdca48","Type":"ContainerStarted","Data":"69e6ea98063041056a3db962d061cc851c7361997d7aa25dbc729b81ec39c687"} Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.420615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.420880 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:21.920854043 +0000 UTC m=+177.311339843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.430711 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-94drr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.448054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpnhv\" (UniqueName: \"kubernetes.io/projected/4ddb8168-a234-4a98-9feb-3301169affe9-kube-api-access-lpnhv\") pod \"package-server-manager-789f6589d5-9blhj\" (UID: \"4ddb8168-a234-4a98-9feb-3301169affe9\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.459571 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkl5x\" (UniqueName: \"kubernetes.io/projected/15b52cf1-2c86-4747-9be1-a690a5a125ca-kube-api-access-wkl5x\") pod \"service-ca-9c57cc56f-tp2pv\" (UID: \"15b52cf1-2c86-4747-9be1-a690a5a125ca\") " pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.460624 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6j26j\" (UniqueName: \"kubernetes.io/projected/056782af-3e2e-4c24-a9c6-28c7acf1834b-kube-api-access-6j26j\") pod \"csi-hostpathplugin-tllsr\" (UID: \"056782af-3e2e-4c24-a9c6-28c7acf1834b\") " pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.479848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/62bc1d94-f1c8-4c29-ab4f-becc5775876a-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-x8lwx\" (UID: \"62bc1d94-f1c8-4c29-ab4f-becc5775876a\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.490901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zgbt\" (UniqueName: \"kubernetes.io/projected/ff00ff81-a3dd-477f-98c7-a99d0d462f57-kube-api-access-8zgbt\") pod \"kube-storage-version-migrator-operator-b67b599dd-bdws4\" (UID: \"ff00ff81-a3dd-477f-98c7-a99d0d462f57\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.508107 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r8tm\" (UniqueName: \"kubernetes.io/projected/a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc-kube-api-access-6r8tm\") pod \"ingress-canary-qv5qk\" (UID: \"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc\") " pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.526695 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.527708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.528019 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.028008869 +0000 UTC m=+177.418494669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.528789 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tprc\" (UniqueName: \"kubernetes.io/projected/63b7fc38-b496-4673-9912-0b7c1018962b-kube-api-access-2tprc\") pod \"machine-config-operator-74547568cd-24p7p\" (UID: \"63b7fc38-b496-4673-9912-0b7c1018962b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.556516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kcjj\" (UniqueName: 
\"kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj\") pod \"collect-profiles-29531190-g7gns\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.568998 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9tq\" (UniqueName: \"kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq\") pod \"controller-manager-879f6c89f-2fhkt\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.576104 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.584754 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.587061 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mj59\" (UniqueName: \"kubernetes.io/projected/1609f0af-9131-40b0-8723-ed972292effb-kube-api-access-5mj59\") pod \"service-ca-operator-777779d784-22nrg\" (UID: \"1609f0af-9131-40b0-8723-ed972292effb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.604787 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.605565 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bcmg\" (UniqueName: \"kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg\") pod \"marketplace-operator-79b997595-r7fm5\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.613825 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.617706 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.627034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrxvc\" (UniqueName: \"kubernetes.io/projected/5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121-kube-api-access-lrxvc\") pod \"machine-config-server-n2bkb\" (UID: \"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121\") " pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.629003 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.629306 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.129288934 +0000 UTC m=+177.519774734 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.631309 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.639105 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.650673 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.652338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qpd8\" (UniqueName: \"kubernetes.io/projected/446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61-kube-api-access-6qpd8\") pod \"multus-admission-controller-857f4d67dd-6ph4l\" (UID: \"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.655833 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.670299 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfft\" (UniqueName: \"kubernetes.io/projected/dc337bf7-2539-47f2-a100-a0e47b747abc-kube-api-access-cjfft\") pod \"apiserver-76f77b778f-x58kc\" (UID: \"dc337bf7-2539-47f2-a100-a0e47b747abc\") " pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.682079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.689326 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.690801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7xcq\" (UniqueName: \"kubernetes.io/projected/84338b9c-fbd7-4987-95d7-21a4d09e2b05-kube-api-access-k7xcq\") pod \"catalog-operator-68c6474976-q87hn\" (UID: \"84338b9c-fbd7-4987-95d7-21a4d09e2b05\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.698526 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qv5qk" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.714508 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.730351 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n2bkb" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.734469 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.734790 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.234777624 +0000 UTC m=+177.625263414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.744009 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.814632 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.838176 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.838685 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.3386666 +0000 UTC m=+177.729152400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.843981 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.862696 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.868865 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.882328 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-94drr"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.897776 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.926958 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.939736 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:21 crc kubenswrapper[4768]: E0223 18:36:21.940152 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.44013474 +0000 UTC m=+177.830620550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.979810 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.982754 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.984636 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-ggqts"] Feb 23 18:36:21 crc kubenswrapper[4768]: I0223 18:36:21.995902 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-zh7hl"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.040624 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.041010 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:22.540962943 +0000 UTC m=+177.931448743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.041449 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.041841 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.541832636 +0000 UTC m=+177.932318426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.142079 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.142692 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.642667119 +0000 UTC m=+178.033152919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.223026 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.246664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.247282 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.747262714 +0000 UTC m=+178.137748514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.348234 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.348810 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.848791735 +0000 UTC m=+178.239277535 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.416103 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25ssm" podStartSLOduration=117.416088685 podStartE2EDuration="1m57.416088685s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:22.414871752 +0000 UTC m=+177.805357552" watchObservedRunningTime="2026-02-23 18:36:22.416088685 +0000 UTC m=+177.806574485" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.444382 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n2bkb" event={"ID":"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121","Type":"ContainerStarted","Data":"e4da4b4d7a49aec7091cf4e56ff80e8f40dbf631ebaa8a4b9365f2a10c13e5d0"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.446203 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" event={"ID":"fdee7aa6-3507-4d5c-8039-646b66ece997","Type":"ContainerStarted","Data":"097a026d9ca478a69fb0e1d9e94f2fb65d1ccde268c13e0fa5cde77bb128d56a"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.449590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.449973 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:22.949951437 +0000 UTC m=+178.340437237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.456090 4768 generic.go:334] "Generic (PLEG): container finished" podID="78cada77-daaa-4a63-acf4-12499986ea25" containerID="fa12e407ff4caedccb5672809915e9afc0c0c4f25a903df88f80b5d47d859f9d" exitCode=0 Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.456554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" event={"ID":"78cada77-daaa-4a63-acf4-12499986ea25","Type":"ContainerDied","Data":"fa12e407ff4caedccb5672809915e9afc0c0c4f25a903df88f80b5d47d859f9d"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.460917 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" event={"ID":"776d3448-3eb2-4f0f-b534-3a0b1df1ebe8","Type":"ContainerStarted","Data":"011e018da5bf12db1fe43b937765c05e5d90dd0192b1ecac7382ca1963366475"} Feb 
23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.464563 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" event={"ID":"d1dce2ce-c431-4bc4-9f67-043f68609576","Type":"ContainerStarted","Data":"d73786d93d21afd42734dd207434342aa47123fec3945f759cba0dbc56775524"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.474507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" event={"ID":"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de","Type":"ContainerStarted","Data":"9b6492814819dddc921cc7330066d5255e5c5877a1ac61ce10cefd061bea145f"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.477326 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" event={"ID":"30b720f3-fda0-41f1-bca9-e52fe84a3535","Type":"ContainerStarted","Data":"02e7c3494e3f3eccbb91a6de4f4c630bea93744947ad03ddbe9803768471cf81"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.478655 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v9856" event={"ID":"c269d15e-90d0-47d8-b2bd-f5785fa1a69b","Type":"ContainerStarted","Data":"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.489919 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" event={"ID":"31ebc831-fd3d-4dfa-8b67-a0fa553b3472","Type":"ContainerStarted","Data":"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.490526 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.491680 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" event={"ID":"63b7fc38-b496-4673-9912-0b7c1018962b","Type":"ContainerStarted","Data":"9a46e3d74452243669bc732b439018fec755cc0afc056ec0a04f45ee644b56c9"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.492712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" event={"ID":"656ceec2-cb68-438c-a58f-5e64f3296cc6","Type":"ContainerStarted","Data":"6afd2f12eb8e59d27bc519b8baf3a6bd415e4515e0bdf4ae041051e2ab23248d"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.492736 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" event={"ID":"656ceec2-cb68-438c-a58f-5e64f3296cc6","Type":"ContainerStarted","Data":"81f5c9bb5f92684a22544a16b25a0082ee14ec90308643f52e1d6f54e02f258d"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.500197 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" event={"ID":"794d73ce-3e95-4492-a64d-4ef84a11d014","Type":"ContainerStarted","Data":"0d304fd0cff6d84a0dc76d6b28284f5cccc06907012be884479fd91866ae0100"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.500414 4768 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lsn69 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.500517 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerName="oauth-openshift" probeResult="failure" output="Get 
\"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.505149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" event={"ID":"c36b0ac3-8286-4df2-87cc-afc1edd2a19b","Type":"ContainerStarted","Data":"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.505235 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.505292 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" event={"ID":"c36b0ac3-8286-4df2-87cc-afc1edd2a19b","Type":"ContainerStarted","Data":"5b2f5c89dd9614632067fca5a0fb1ba66e7734625567ca186c10b4acf6efdf0d"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.518395 4768 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-nrd6m container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.518451 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.534150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-94drr" 
event={"ID":"4bc8243b-25c6-4117-b4eb-470f5ba127e3","Type":"ContainerStarted","Data":"81cace567ff8768bc7cb42e71ebff87732cb734f2b8c2cc691c234d55de5dfcc"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.540647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" event={"ID":"2f9a97e9-9855-4a63-8d90-8ee30404ab5f","Type":"ContainerStarted","Data":"378a5db39d07832630a74ac9f3d4e720d4f823003a9fb0fc8de1af13d7a82f34"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.544515 4768 generic.go:334] "Generic (PLEG): container finished" podID="3af14597-4b62-431a-939a-2e7c3592a896" containerID="8ceeb516251a463f6530b85795683718c8f05a6e5b60689a0840a3c62f61040c" exitCode=0 Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.544621 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" event={"ID":"3af14597-4b62-431a-939a-2e7c3592a896","Type":"ContainerDied","Data":"8ceeb516251a463f6530b85795683718c8f05a6e5b60689a0840a3c62f61040c"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.549886 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" event={"ID":"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8","Type":"ContainerStarted","Data":"22fb33d4783e81a60a29353ffa26c0c28eebd49ca03af08c9fca7b45da1c5d65"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.552333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.552521 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.052494536 +0000 UTC m=+178.442980336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.553308 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.554418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" event={"ID":"cd763cbb-7bd3-4384-b30e-ee24938ed653","Type":"ContainerStarted","Data":"4edd34f991171ed8534ab07f1cefaf8c186db722bf85c432a3267291004bd0a2"} Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.555015 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.055000625 +0000 UTC m=+178.445486415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.571994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" event={"ID":"4efae636-8579-4696-b7a7-91c925fdca48","Type":"ContainerStarted","Data":"87237d7535d44941286b8a0582810c818eba35f4642bfa68edea6dc2e7ad3c1c"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.611056 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mz6w6" podStartSLOduration=117.611039996 podStartE2EDuration="1m57.611039996s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:22.610394608 +0000 UTC m=+178.000880408" watchObservedRunningTime="2026-02-23 18:36:22.611039996 +0000 UTC m=+178.001525796" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.619613 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nnn8b" event={"ID":"51c3071b-8dc3-402a-8f3e-a89fa71f4a54","Type":"ContainerStarted","Data":"b55f8a8f41e31ec2b8f86acc10f10076d9d809f0a2a295b8e8eb5422df69ab86"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.619678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nnn8b" 
event={"ID":"51c3071b-8dc3-402a-8f3e-a89fa71f4a54","Type":"ContainerStarted","Data":"045d721029b02d11625d7ca611ba29aaec43157cb899ed600c775b353c47790e"} Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.638770 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mz6w6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.638827 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mz6w6" podUID="26f1fca3-79fa-4717-8b2b-dbdad99057cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.651293 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.654101 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.669329 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.169295648 +0000 UTC m=+178.559781458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.690429 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-22nrg"] Feb 23 18:36:22 crc kubenswrapper[4768]: W0223 18:36:22.725366 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbed104c_291d_45f5_b41d_99814829422e.slice/crio-a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1 WatchSource:0}: Error finding container a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1: Status 404 returned error can't find the container with id a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1 Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.728249 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.749013 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-tp2pv"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.756236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" 
Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.758022 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.258009076 +0000 UTC m=+178.648494876 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.839954 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.868679 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.869030 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.369011598 +0000 UTC m=+178.759497398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.886253 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj"] Feb 23 18:36:22 crc kubenswrapper[4768]: W0223 18:36:22.910414 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ddb8168_a234_4a98_9feb_3301169affe9.slice/crio-811cb1d8f3aad6c06053bd94d42544e0ab41199accee17a409781536031f1c66 WatchSource:0}: Error finding container 811cb1d8f3aad6c06053bd94d42544e0ab41199accee17a409781536031f1c66: Status 404 returned error can't find the container with id 811cb1d8f3aad6c06053bd94d42544e0ab41199accee17a409781536031f1c66 Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.956384 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.971361 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qv5qk"] Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.971957 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:22 crc kubenswrapper[4768]: E0223 18:36:22.972228 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.472217816 +0000 UTC m=+178.862703616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:22 crc kubenswrapper[4768]: I0223 18:36:22.972822 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:22.995374 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.027856 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.035543 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.071696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r7fm5"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.074731 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.075150 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.575131846 +0000 UTC m=+178.965617646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.079127 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ph4l"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.081507 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tllsr"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.108689 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.119154 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-x58kc"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.122493 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn"] Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.127761 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-7ghtp" podStartSLOduration=118.127743702 podStartE2EDuration="1m58.127743702s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.126678523 +0000 UTC m=+178.517164313" watchObservedRunningTime="2026-02-23 18:36:23.127743702 +0000 UTC m=+178.518229502" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.159745 4768 csr.go:261] certificate signing request csr-qf2tq is approved, waiting to be issued Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.167812 4768 csr.go:257] certificate signing request csr-qf2tq is issued Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.177580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.177889 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.67787738 +0000 UTC m=+179.068363180 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.192939 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:23 crc kubenswrapper[4768]: W0223 18:36:23.205774 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84338b9c_fbd7_4987_95d7_21a4d09e2b05.slice/crio-3e6d9159d71e5c7d66ce791e556a3f22aea1dd18f7dd78bd34b4ed5d5d0c5b1f WatchSource:0}: Error finding container 3e6d9159d71e5c7d66ce791e556a3f22aea1dd18f7dd78bd34b4ed5d5d0c5b1f: Status 404 returned error can't find the container with id 3e6d9159d71e5c7d66ce791e556a3f22aea1dd18f7dd78bd34b4ed5d5d0c5b1f Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.209877 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:23 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:23 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:23 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.209932 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.241183 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4hmsw" podStartSLOduration=118.24116945 podStartE2EDuration="1m58.24116945s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.205045397 +0000 UTC m=+178.595531187" watchObservedRunningTime="2026-02-23 18:36:23.24116945 +0000 UTC m=+178.631655250" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.278929 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.279229 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.779212057 +0000 UTC m=+179.169697857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.325888 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2tb29" podStartSLOduration=118.325868959 podStartE2EDuration="1m58.325868959s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.322768034 +0000 UTC m=+178.713253834" watchObservedRunningTime="2026-02-23 18:36:23.325868959 +0000 UTC m=+178.716354759" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.383925 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.384291 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.884257674 +0000 UTC m=+179.274743484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.465832 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" podStartSLOduration=118.465805536 podStartE2EDuration="1m58.465805536s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.403997867 +0000 UTC m=+178.794483677" watchObservedRunningTime="2026-02-23 18:36:23.465805536 +0000 UTC m=+178.856291336" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.468633 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nnn8b" podStartSLOduration=118.468622224 podStartE2EDuration="1m58.468622224s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.46557059 +0000 UTC m=+178.856056410" watchObservedRunningTime="2026-02-23 18:36:23.468622224 +0000 UTC m=+178.859108024" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.486714 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.486929 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.986904627 +0000 UTC m=+179.377390427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.487117 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.487487 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:23.987477702 +0000 UTC m=+179.377963502 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.505533 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4hs2n" podStartSLOduration=118.505518788 podStartE2EDuration="1m58.505518788s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.504186702 +0000 UTC m=+178.894672502" watchObservedRunningTime="2026-02-23 18:36:23.505518788 +0000 UTC m=+178.896004588" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.572079 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" podStartSLOduration=118.572053868 podStartE2EDuration="1m58.572053868s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.570055982 +0000 UTC m=+178.960541782" watchObservedRunningTime="2026-02-23 18:36:23.572053868 +0000 UTC m=+178.962539668" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.588327 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.588676 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.088654844 +0000 UTC m=+179.479140644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.644324 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qv5qk" event={"ID":"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc","Type":"ContainerStarted","Data":"a464eef8070bd6e13cf6382fc6546fae9b288e96a2ddf2e5010dd3e6d909c445"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.687594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-phnrg" event={"ID":"cd763cbb-7bd3-4384-b30e-ee24938ed653","Type":"ContainerStarted","Data":"5c5addeb4b8ea62e46da606bbecedc9dce17ee8ca84bfa75d3c0c373c62d63f9"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.690177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.690592 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.190579236 +0000 UTC m=+179.581065036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.713582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" event={"ID":"3cff9f42-aeae-4c76-a542-75cc5c37254a","Type":"ContainerStarted","Data":"15f76464bbb56246cd9b63990ebec3db8b69520b58dcfac7e6a7cad6f461b523"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.736050 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" podStartSLOduration=117.736026336 podStartE2EDuration="1m57.736026336s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.73323753 +0000 UTC m=+179.123723330" watchObservedRunningTime="2026-02-23 18:36:23.736026336 +0000 UTC m=+179.126512136" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.747564 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-f9d7485db-v9856" podStartSLOduration=118.747544933 podStartE2EDuration="1m58.747544933s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.704726085 +0000 UTC m=+179.095211885" watchObservedRunningTime="2026-02-23 18:36:23.747544933 +0000 UTC m=+179.138030743" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.763932 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" event={"ID":"63b7fc38-b496-4673-9912-0b7c1018962b","Type":"ContainerStarted","Data":"19ba4232557c5b29ba18bb917f8bee7747e807314e736aeb5647da09cb5dbc90"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.769876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" event={"ID":"056782af-3e2e-4c24-a9c6-28c7acf1834b","Type":"ContainerStarted","Data":"50033a96327680e22a86afa78ec346c45ff2ac3eed47e6ee5f6d978f5958af2e"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.776922 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-94drr" event={"ID":"4bc8243b-25c6-4117-b4eb-470f5ba127e3","Type":"ContainerStarted","Data":"c544167157f4d067a8f3e38f75349560cdaaeca2cb31ff1187d75675c195b010"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.778312 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-94drr" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.778439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-94drr" event={"ID":"4bc8243b-25c6-4117-b4eb-470f5ba127e3","Type":"ContainerStarted","Data":"9f977ddf4036d88496186298e2d51412fe3d2df94056c6d754a21c52b495e4b0"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.780078 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" event={"ID":"4ddb8168-a234-4a98-9feb-3301169affe9","Type":"ContainerStarted","Data":"bac3086d9bfb737013e006fb1500e3aadabb961c90eed10e88b011f68dd099a8"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.780629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" event={"ID":"4ddb8168-a234-4a98-9feb-3301169affe9","Type":"ContainerStarted","Data":"811cb1d8f3aad6c06053bd94d42544e0ab41199accee17a409781536031f1c66"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.791455 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.793569 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" event={"ID":"84338b9c-fbd7-4987-95d7-21a4d09e2b05","Type":"ContainerStarted","Data":"3e6d9159d71e5c7d66ce791e556a3f22aea1dd18f7dd78bd34b4ed5d5d0c5b1f"} Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.793713 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.293692061 +0000 UTC m=+179.684177861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.802818 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-94drr" podStartSLOduration=5.802800041 podStartE2EDuration="5.802800041s" podCreationTimestamp="2026-02-23 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.801816755 +0000 UTC m=+179.192302565" watchObservedRunningTime="2026-02-23 18:36:23.802800041 +0000 UTC m=+179.193285841" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.809345 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-s56mb" podStartSLOduration=118.809332072 podStartE2EDuration="1m58.809332072s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.770483113 +0000 UTC m=+179.160968913" watchObservedRunningTime="2026-02-23 18:36:23.809332072 +0000 UTC m=+179.199817872" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.814604 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" event={"ID":"ff00ff81-a3dd-477f-98c7-a99d0d462f57","Type":"ContainerStarted","Data":"629af15b7b81f782ce812f0a42d1c3590753451eacbb5ca8aa5317fd05d53f95"} Feb 23 18:36:23 crc 
kubenswrapper[4768]: I0223 18:36:23.821157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" event={"ID":"ed12b20a-27bb-4560-a46e-68302e06f373","Type":"ContainerStarted","Data":"beb3f12cc644cbcc210a414d1a546df5e07d07bd848c82d8e89e940262bf1925"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.840404 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" event={"ID":"1609f0af-9131-40b0-8723-ed972292effb","Type":"ContainerStarted","Data":"d6a0cfb47c8ea3c77f84af72a793ab62ef37f9c9b4cb9d345169dd96570c2ade"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.840457 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" event={"ID":"1609f0af-9131-40b0-8723-ed972292effb","Type":"ContainerStarted","Data":"72da8272772af712445b5e1cc884009f6462b4552d9549a8f569c90c31edcf43"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.845584 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" event={"ID":"dc337bf7-2539-47f2-a100-a0e47b747abc","Type":"ContainerStarted","Data":"642341269ac2507ace0790fe32b368741235f4d8797615c4d695ac224a2bbb51"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.853419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" event={"ID":"794d73ce-3e95-4492-a64d-4ef84a11d014","Type":"ContainerStarted","Data":"f318c8c53435e094caf3a8b029401de6eb278bcd3ce4edc0914f21751c42627e"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.859460 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" 
event={"ID":"4efae636-8579-4696-b7a7-91c925fdca48","Type":"ContainerStarted","Data":"9460b1ad46a69d2caaa75a9c64c13b3e15f2c13dfa9784b48e31dff695c4ecba"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.865563 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-22nrg" podStartSLOduration=117.865546177 podStartE2EDuration="1m57.865546177s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.864615122 +0000 UTC m=+179.255100922" watchObservedRunningTime="2026-02-23 18:36:23.865546177 +0000 UTC m=+179.256031977" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.869508 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" event={"ID":"dbed104c-291d-45f5-b41d-99814829422e","Type":"ContainerStarted","Data":"572031507feda3505a8da02af6e84219f377e32eecd91fba14e7ba6e9946f2ef"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.869597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" event={"ID":"dbed104c-291d-45f5-b41d-99814829422e","Type":"ContainerStarted","Data":"a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.876697 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" event={"ID":"a491b22c-d857-469a-830d-791d53b4ccad","Type":"ContainerStarted","Data":"54fc724eef8bea98854e3a1a4cc7d3b245fc8aca99d7b93c826af85f76d060a7"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.887288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" 
event={"ID":"62bc1d94-f1c8-4c29-ab4f-becc5775876a","Type":"ContainerStarted","Data":"0a3f0e1452c3508baa2db0c43ffb64e4f6d602b3c85ec90b46d792efadc24ff6"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.898484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:23 crc kubenswrapper[4768]: E0223 18:36:23.899770 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.399747267 +0000 UTC m=+179.790233067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.901682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" event={"ID":"15b52cf1-2c86-4747-9be1-a690a5a125ca","Type":"ContainerStarted","Data":"7de5f1af212a091a57acdf065c0007ebe6cd58030e5ee634441a7fc947af4dd3"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.901739 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" 
event={"ID":"15b52cf1-2c86-4747-9be1-a690a5a125ca","Type":"ContainerStarted","Data":"4c41cce77be23af169a230719063b7800b79093d805ab3549bff998860b9b68d"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.924393 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-b8c6c" podStartSLOduration=118.924375144 podStartE2EDuration="1m58.924375144s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.892060686 +0000 UTC m=+179.282546486" watchObservedRunningTime="2026-02-23 18:36:23.924375144 +0000 UTC m=+179.314860944" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.973557 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" event={"ID":"fdee7aa6-3507-4d5c-8039-646b66ece997","Type":"ContainerStarted","Data":"3486c8584102266c64170bb7ae4b990d75c1cb457195a594c946b53971072c36"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.973944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" event={"ID":"fdee7aa6-3507-4d5c-8039-646b66ece997","Type":"ContainerStarted","Data":"eef078cb592cb12afd01f0f340e367da49065812041eef10cc6ac8bd98f23b59"} Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.980686 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" podStartSLOduration=118.980664132 podStartE2EDuration="1m58.980664132s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.971079969 +0000 UTC m=+179.361565759" 
watchObservedRunningTime="2026-02-23 18:36:23.980664132 +0000 UTC m=+179.371149932" Feb 23 18:36:23 crc kubenswrapper[4768]: I0223 18:36:23.981301 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2q4n2" podStartSLOduration=117.981296339 podStartE2EDuration="1m57.981296339s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:23.92457246 +0000 UTC m=+179.315058260" watchObservedRunningTime="2026-02-23 18:36:23.981296339 +0000 UTC m=+179.371782139" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.009941 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.011148 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.51112546 +0000 UTC m=+179.901611250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.011350 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-tp2pv" podStartSLOduration=118.011332515 podStartE2EDuration="1m58.011332515s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.009249568 +0000 UTC m=+179.399735358" watchObservedRunningTime="2026-02-23 18:36:24.011332515 +0000 UTC m=+179.401818315" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.047629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" event={"ID":"587bbe6d-3e0d-42bb-92c9-bb7ff7f3c8d8","Type":"ContainerStarted","Data":"3a8fa8b85539196cf2f7dc59f4872abbf6fcff0be88e82b899ef8bd68f87aaf1"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.060979 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-kzq46" podStartSLOduration=119.06096171 podStartE2EDuration="1m59.06096171s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.059648783 +0000 UTC m=+179.450134603" watchObservedRunningTime="2026-02-23 18:36:24.06096171 +0000 UTC m=+179.451447510" Feb 23 18:36:24 crc 
kubenswrapper[4768]: I0223 18:36:24.067787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" event={"ID":"1496fda2-941d-48a9-8bdd-05ee6f0d235a","Type":"ContainerStarted","Data":"a66d917d7a5e019397cc11801773d328be5ea77a5a2624dd54d6efe5306a2f0b"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.067895 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" event={"ID":"1496fda2-941d-48a9-8bdd-05ee6f0d235a","Type":"ContainerStarted","Data":"abccaed471d16aabbfd689a14e30ee21bea7b48a7133ac4dba7f1d3b6d2db108"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.087691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" event={"ID":"30b720f3-fda0-41f1-bca9-e52fe84a3535","Type":"ContainerStarted","Data":"b38734e4efc2eaa0fbf4a7a82b66b646d63468da8ec6331efbf0dff265682cb1"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.098963 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-zh7hl" podStartSLOduration=119.098944784 podStartE2EDuration="1m59.098944784s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.097502985 +0000 UTC m=+179.487988785" watchObservedRunningTime="2026-02-23 18:36:24.098944784 +0000 UTC m=+179.489430584" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.117639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: 
\"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.118466 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6qzzx" podStartSLOduration=118.11845368 podStartE2EDuration="1m58.11845368s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.117003591 +0000 UTC m=+179.507489391" watchObservedRunningTime="2026-02-23 18:36:24.11845368 +0000 UTC m=+179.508939490" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.120802 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.620786204 +0000 UTC m=+180.011272004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.141851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" event={"ID":"4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de","Type":"ContainerStarted","Data":"557279892dc7a241c809d830e9a22c4d3a65272bfc5fdcf18a0689953df121d2"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.154090 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" event={"ID":"3af14597-4b62-431a-939a-2e7c3592a896","Type":"ContainerStarted","Data":"c3f0a19aa6c1653f7054261b4d2301004d535ed9f9bf53bcc8a0cb127bb7b167"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.156157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" event={"ID":"2f9a97e9-9855-4a63-8d90-8ee30404ab5f","Type":"ContainerStarted","Data":"48af656b853e5aab03f81fe0e8a4665ac80bdb93204a20058c7d77b1c6a4ffa3"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.158939 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" event={"ID":"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9","Type":"ContainerStarted","Data":"498280ccd22e1f496b6bba28d3e572282bed5fed552679468f00c4dd0815cea2"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.160988 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vn4nn" 
podStartSLOduration=118.160973319 podStartE2EDuration="1m58.160973319s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.158239744 +0000 UTC m=+179.548725544" watchObservedRunningTime="2026-02-23 18:36:24.160973319 +0000 UTC m=+179.551459119" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.179146 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-23 18:31:23 +0000 UTC, rotation deadline is 2026-11-20 23:38:50.86433243 +0000 UTC Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.183832 4768 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6485h2m26.680506793s for next certificate rotation Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.183786 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" podStartSLOduration=118.183772696 podStartE2EDuration="1m58.183772696s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.182604554 +0000 UTC m=+179.573090354" watchObservedRunningTime="2026-02-23 18:36:24.183772696 +0000 UTC m=+179.574258496" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.182881 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" event={"ID":"78cada77-daaa-4a63-acf4-12499986ea25","Type":"ContainerStarted","Data":"b176d88644d637fd1e734b72827eca993f33f2be3d7652df7ed1597b4f7f7ae6"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.196025 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:24 crc 
kubenswrapper[4768]: I0223 18:36:24.200087 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:24 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:24 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:24 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.201065 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.206358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n2bkb" event={"ID":"5bfb7d7a-34c4-42fa-ae0e-9f9cd54e7121","Type":"ContainerStarted","Data":"c8561445627c001e426a18636dd7bcd43d381553836d38ed30de1e2bfd336993"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.207844 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" podStartSLOduration=119.207825937 podStartE2EDuration="1m59.207825937s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.207077237 +0000 UTC m=+179.597563027" watchObservedRunningTime="2026-02-23 18:36:24.207825937 +0000 UTC m=+179.598311737" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.219027 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.219548 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.719523909 +0000 UTC m=+180.110009709 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.219651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.220976 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.720961089 +0000 UTC m=+180.111446879 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.225515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" event={"ID":"8698eec3-b444-4838-bfff-36fb054e8578","Type":"ContainerStarted","Data":"4a61958666a323bd34f11b67b810d151979fbfc6403d4c506f6888af8f28d685"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.225937 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.244646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" event={"ID":"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61","Type":"ContainerStarted","Data":"15d8b74c31fe6239109eadd2e212de7501264caa9a9140f828ff3e0a328532a0"} Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.253408 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pg8m4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.253463 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" podUID="8698eec3-b444-4838-bfff-36fb054e8578" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.266556 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.285744 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n2bkb" podStartSLOduration=6.285716809 podStartE2EDuration="6.285716809s" podCreationTimestamp="2026-02-23 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.281618236 +0000 UTC m=+179.672104036" watchObservedRunningTime="2026-02-23 18:36:24.285716809 +0000 UTC m=+179.676202609" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.286629 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" podStartSLOduration=119.286621144 podStartE2EDuration="1m59.286621144s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.246994974 +0000 UTC m=+179.637480774" watchObservedRunningTime="2026-02-23 18:36:24.286621144 +0000 UTC m=+179.677106944" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.320570 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.320842 4768 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.820804463 +0000 UTC m=+180.211290263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.321625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.336614 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.836592598 +0000 UTC m=+180.227078398 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.394187 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" podStartSLOduration=118.394161561 podStartE2EDuration="1m58.394161561s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:24.390679414 +0000 UTC m=+179.781165214" watchObservedRunningTime="2026-02-23 18:36:24.394161561 +0000 UTC m=+179.784647361" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.423411 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.923372274 +0000 UTC m=+180.313858074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.423909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.426298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.428626 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:24.928612578 +0000 UTC m=+180.319098378 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.529380 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.529763 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.029744338 +0000 UTC m=+180.420230138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.576726 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.630820 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.631146 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.131128786 +0000 UTC m=+180.521614596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.733115 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.733771 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.233753468 +0000 UTC m=+180.624239268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.835323 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.835681 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.335669759 +0000 UTC m=+180.726155559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.936687 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.936837 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.43681622 +0000 UTC m=+180.827302020 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:24 crc kubenswrapper[4768]: I0223 18:36:24.936971 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:24 crc kubenswrapper[4768]: E0223 18:36:24.937282 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.437274242 +0000 UTC m=+180.827760042 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.037987 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.038399 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.538379263 +0000 UTC m=+180.928865063 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.139673 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.140325 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.640311605 +0000 UTC m=+181.030797405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.197159 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:25 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:25 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:25 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.197212 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.240636 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.244617 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:25.744255613 +0000 UTC m=+181.134741413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.264746 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" event={"ID":"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9","Type":"ContainerStarted","Data":"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.265847 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.269372 4768 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-2fhkt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.269411 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.270425 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-tllsr" event={"ID":"056782af-3e2e-4c24-a9c6-28c7acf1834b","Type":"ContainerStarted","Data":"3185d277cdad2f8cf3d75bd9f95b1c6619986f88fb4a2636bad5c0779830290e"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.271861 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" event={"ID":"3cff9f42-aeae-4c76-a542-75cc5c37254a","Type":"ContainerStarted","Data":"b3ff17088e7daa77067d73e5e6d823f55e6cc109603a480911a8ccdc188f0b4a"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.272212 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.273088 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r7fm5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.273126 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.274325 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-ggqts" event={"ID":"2f9a97e9-9855-4a63-8d90-8ee30404ab5f","Type":"ContainerStarted","Data":"3cfbc6e9dd8c5f2594a880704d6cc13ed1bb0c0069bbf6ff180b19d88042a671"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.277528 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" event={"ID":"ff00ff81-a3dd-477f-98c7-a99d0d462f57","Type":"ContainerStarted","Data":"308cd15313fe1e1245d451f6fbed41be32f9c3dbdee2cc38fb2dd350dbf2d671"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.281978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" event={"ID":"84338b9c-fbd7-4987-95d7-21a4d09e2b05","Type":"ContainerStarted","Data":"9b9f2c44559732ca9e0c866ae21640fea75cb2c0372ac0be12e0c4769d0ef3bb"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.282132 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.283730 4768 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-q87hn container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.283778 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" podUID="84338b9c-fbd7-4987-95d7-21a4d09e2b05" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.289242 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" event={"ID":"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61","Type":"ContainerStarted","Data":"c8df8db11c7b18b5a97d6e52d8a8b954ab311f31a57934fb73f0eaf89c04c54c"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.289300 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" event={"ID":"446ea39f-c8e4-4ffa-b8e8-7cf6ba765c61","Type":"ContainerStarted","Data":"5f930268fa4863390755a9c0ebb058fd9a5e7929e59d7aa06ee59abda3ca69b8"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.291746 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" event={"ID":"1496fda2-941d-48a9-8bdd-05ee6f0d235a","Type":"ContainerStarted","Data":"7da7a2473505a7646841b44bd8d466930696eeced8ce316037c2405e5054f72d"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.298915 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" event={"ID":"a491b22c-d857-469a-830d-791d53b4ccad","Type":"ContainerStarted","Data":"5fedc8f7a1da95f436d46851bcee987b50bb40494c58bd75a2320a656fb8b793"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.299596 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.300706 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" podStartSLOduration=120.300693724 podStartE2EDuration="2m0.300693724s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.298899125 +0000 UTC m=+180.689384915" watchObservedRunningTime="2026-02-23 18:36:25.300693724 +0000 UTC m=+180.691179524" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.301827 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" 
event={"ID":"63b7fc38-b496-4673-9912-0b7c1018962b","Type":"ContainerStarted","Data":"025c63393935f5b92665570392155c04b9fb492d84f41da0ed752b79e93e1b08"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.302068 4768 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-57sqf container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.302122 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" podUID="a491b22c-d857-469a-830d-791d53b4ccad" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.303879 4768 generic.go:334] "Generic (PLEG): container finished" podID="dc337bf7-2539-47f2-a100-a0e47b747abc" containerID="c3c0591238c8f9f8f2eef7106c9482e00ea159f3dbe654df435ffb5ec5981511" exitCode=0 Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.303936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" event={"ID":"dc337bf7-2539-47f2-a100-a0e47b747abc","Type":"ContainerDied","Data":"c3c0591238c8f9f8f2eef7106c9482e00ea159f3dbe654df435ffb5ec5981511"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.326693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" event={"ID":"4ddb8168-a234-4a98-9feb-3301169affe9","Type":"ContainerStarted","Data":"297cf36991dcd866423a48e61b4f6fd65cf8b7ebfccfbe01e128ac60547521db"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.326832 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.330328 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" podStartSLOduration=119.330310139 podStartE2EDuration="1m59.330310139s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.329749174 +0000 UTC m=+180.720234964" watchObservedRunningTime="2026-02-23 18:36:25.330310139 +0000 UTC m=+180.720795939" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.357064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.360628 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.860607642 +0000 UTC m=+181.251093442 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.372601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" event={"ID":"62bc1d94-f1c8-4c29-ab4f-becc5775876a","Type":"ContainerStarted","Data":"67d03f6afbe4bb7db03072fd60b49c60413c4534321ddd8b0431013eb3773529"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.398900 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bdws4" podStartSLOduration=120.398875874 podStartE2EDuration="2m0.398875874s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.385329571 +0000 UTC m=+180.775815371" watchObservedRunningTime="2026-02-23 18:36:25.398875874 +0000 UTC m=+180.789361674" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.401643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" event={"ID":"ed12b20a-27bb-4560-a46e-68302e06f373","Type":"ContainerStarted","Data":"762046dade8070646de2e790458e89b5ed45ff366a778f2167a2b6980b83252e"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.422691 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" podStartSLOduration=119.422677109 podStartE2EDuration="1m59.422677109s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.420789046 +0000 UTC m=+180.811274846" watchObservedRunningTime="2026-02-23 18:36:25.422677109 +0000 UTC m=+180.813162909" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.434386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" event={"ID":"8698eec3-b444-4838-bfff-36fb054e8578","Type":"ContainerStarted","Data":"c8872f8383e79a3cd9859bf38641196ed64208aa85cacd7a9adaf9e1ba5528cb"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.435814 4768 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pg8m4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.435876 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" podUID="8698eec3-b444-4838-bfff-36fb054e8578" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.460998 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:25 crc kubenswrapper[4768]: 
E0223 18:36:25.462514 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:25.962486953 +0000 UTC m=+181.352972763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.477635 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qv5qk" event={"ID":"a9009d91-c7b3-4e1c-b6e8-6f2cc9065abc","Type":"ContainerStarted","Data":"4e559f5fc8d50032131a3160dbfbc672399e470d5dd48e5b151cd009c4caba63"} Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.497344 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" podStartSLOduration=119.49731801 podStartE2EDuration="1m59.49731801s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.456254772 +0000 UTC m=+180.846740572" watchObservedRunningTime="2026-02-23 18:36:25.49731801 +0000 UTC m=+180.887803810" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.546807 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6h7t4" podStartSLOduration=120.546787871 
podStartE2EDuration="2m0.546787871s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.499013477 +0000 UTC m=+180.889499277" watchObservedRunningTime="2026-02-23 18:36:25.546787871 +0000 UTC m=+180.937273661" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.547454 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ph4l" podStartSLOduration=120.547448989 podStartE2EDuration="2m0.547448989s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.544808846 +0000 UTC m=+180.935294646" watchObservedRunningTime="2026-02-23 18:36:25.547448989 +0000 UTC m=+180.937934789" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.565465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.568503 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.068486857 +0000 UTC m=+181.458972657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.573350 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-24p7p" podStartSLOduration=120.57333789 podStartE2EDuration="2m0.57333789s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.571659585 +0000 UTC m=+180.962145385" watchObservedRunningTime="2026-02-23 18:36:25.57333789 +0000 UTC m=+180.963823690" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.616233 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.616533 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.660621 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" podStartSLOduration=119.6606008 podStartE2EDuration="1m59.6606008s" podCreationTimestamp="2026-02-23 18:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.605714441 +0000 UTC m=+180.996200241" 
watchObservedRunningTime="2026-02-23 18:36:25.6606008 +0000 UTC m=+181.051086590" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.667998 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.668482 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.168463795 +0000 UTC m=+181.558949595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.682312 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-x8lwx" podStartSLOduration=120.682289196 podStartE2EDuration="2m0.682289196s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.681568356 +0000 UTC m=+181.072054156" watchObservedRunningTime="2026-02-23 18:36:25.682289196 +0000 UTC m=+181.072774996" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 
18:36:25.724806 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-n5vz6" podStartSLOduration=120.724788465 podStartE2EDuration="2m0.724788465s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.723718105 +0000 UTC m=+181.114203905" watchObservedRunningTime="2026-02-23 18:36:25.724788465 +0000 UTC m=+181.115274265" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.772898 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.773488 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.273470753 +0000 UTC m=+181.663956553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.779884 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qv5qk" podStartSLOduration=7.779857488 podStartE2EDuration="7.779857488s" podCreationTimestamp="2026-02-23 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:25.772540288 +0000 UTC m=+181.163026088" watchObservedRunningTime="2026-02-23 18:36:25.779857488 +0000 UTC m=+181.170343288" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.815352 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"] Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.873789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.874132 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:26.37411515 +0000 UTC m=+181.764600950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.906572 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"] Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.977614 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dxkwc" Feb 23 18:36:25 crc kubenswrapper[4768]: I0223 18:36:25.979471 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:25 crc kubenswrapper[4768]: E0223 18:36:25.979801 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.479787276 +0000 UTC m=+181.870273076 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.005514 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.080385 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.081021 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.580998488 +0000 UTC m=+181.971484288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.182965 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.183321 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.683308541 +0000 UTC m=+182.073794341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.199141 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:26 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:26 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:26 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.199214 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.285809 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.286270 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:26.78621229 +0000 UTC m=+182.176698100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.388100 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.389199 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.889169891 +0000 UTC m=+182.279655691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.484984 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" event={"ID":"dc337bf7-2539-47f2-a100-a0e47b747abc","Type":"ContainerStarted","Data":"384bb9d2d09d03dd6f3811d0114d82576ff4eae52db507690ba4c30188491abf"} Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.487196 4768 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-r7fm5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.487290 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.487495 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" event={"ID":"dc337bf7-2539-47f2-a100-a0e47b747abc","Type":"ContainerStarted","Data":"7302b61e3d37ab0c1d698b93667ba2d23db88a34e509424d6e424cd8a605c544"} Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.489358 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerName="route-controller-manager" containerID="cri-o://d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50" gracePeriod=30 Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.489633 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.489868 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.989840869 +0000 UTC m=+182.380326669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.490177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.490515 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:26.990502677 +0000 UTC m=+182.380988477 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.504939 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-sqjgc" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.508829 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-57sqf" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.514510 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.549753 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-q87hn" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.569335 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" podStartSLOduration=121.569303303 podStartE2EDuration="2m1.569303303s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:26.528414789 +0000 UTC m=+181.918900599" watchObservedRunningTime="2026-02-23 18:36:26.569303303 +0000 UTC m=+181.959789103" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.591789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.593393 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.093346295 +0000 UTC m=+182.483832105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.695987 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.696365 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.196352176 +0000 UTC m=+182.586837976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.722405 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pg8m4" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.797763 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.798041 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.298022152 +0000 UTC m=+182.688507952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.900059 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:26 crc kubenswrapper[4768]: E0223 18:36:26.900432 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.400419407 +0000 UTC m=+182.790905207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.907634 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.907701 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.955480 4768 patch_prober.go:28] interesting pod/apiserver-76f77b778f-x58kc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" start-of-body= Feb 23 18:36:26 crc kubenswrapper[4768]: I0223 18:36:26.955733 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" podUID="dc337bf7-2539-47f2-a100-a0e47b747abc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.34:8443/livez\": dial tcp 10.217.0.34:8443: connect: connection refused" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.000737 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 
18:36:27.001052 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.501034553 +0000 UTC m=+182.891520363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.101729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.102076 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.602063071 +0000 UTC m=+182.992548871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.105761 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.199045 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:27 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:27 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:27 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.199122 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202497 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72nlp\" (UniqueName: \"kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp\") pod \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202576 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert\") pod \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202630 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config\") pod \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202733 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca\") pod \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\" (UID: \"c36b0ac3-8286-4df2-87cc-afc1edd2a19b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.202940 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.203091 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-23 18:36:27.703061348 +0000 UTC m=+183.093547148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.203442 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c36b0ac3-8286-4df2-87cc-afc1edd2a19b" (UID: "c36b0ac3-8286-4df2-87cc-afc1edd2a19b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.203456 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config" (OuterVolumeSpecName: "config") pod "c36b0ac3-8286-4df2-87cc-afc1edd2a19b" (UID: "c36b0ac3-8286-4df2-87cc-afc1edd2a19b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.209240 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.222211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1bcfbee2-d95a-4f58-b436-5233d3691ee8-metrics-certs\") pod \"network-metrics-daemon-9s8hm\" (UID: \"1bcfbee2-d95a-4f58-b436-5233d3691ee8\") " pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.223587 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp" (OuterVolumeSpecName: "kube-api-access-72nlp") pod "c36b0ac3-8286-4df2-87cc-afc1edd2a19b" (UID: "c36b0ac3-8286-4df2-87cc-afc1edd2a19b"). InnerVolumeSpecName "kube-api-access-72nlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.224023 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c36b0ac3-8286-4df2-87cc-afc1edd2a19b" (UID: "c36b0ac3-8286-4df2-87cc-afc1edd2a19b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.304860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.305295 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.805276618 +0000 UTC m=+183.195762418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.305532 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.305544 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.305553 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.305570 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72nlp\" (UniqueName: \"kubernetes.io/projected/c36b0ac3-8286-4df2-87cc-afc1edd2a19b-kube-api-access-72nlp\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.371733 4768 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.406461 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.406779 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:27.906761338 +0000 UTC m=+183.297247138 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.448792 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.454459 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9s8hm" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.491554 4768 generic.go:334] "Generic (PLEG): container finished" podID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerID="d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50" exitCode=0 Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.491609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" event={"ID":"c36b0ac3-8286-4df2-87cc-afc1edd2a19b","Type":"ContainerDied","Data":"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50"} Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.491643 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.491669 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m" event={"ID":"c36b0ac3-8286-4df2-87cc-afc1edd2a19b","Type":"ContainerDied","Data":"5b2f5c89dd9614632067fca5a0fb1ba66e7734625567ca186c10b4acf6efdf0d"} Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.491695 4768 scope.go:117] "RemoveContainer" containerID="d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.497553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" event={"ID":"056782af-3e2e-4c24-a9c6-28c7acf1834b","Type":"ContainerStarted","Data":"77460470d18bff646b16890c84e142c65e6060b92a98a489b0b120b7b0d2dbe9"} Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.497693 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" event={"ID":"056782af-3e2e-4c24-a9c6-28c7acf1834b","Type":"ContainerStarted","Data":"dde9910f0a6d60919a9b91262a283d040b7d821409abc526fc891b0bc9077376"} Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.497706 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerName="controller-manager" containerID="cri-o://d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481" gracePeriod=30 Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.507571 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.507967 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:28.00794972 +0000 UTC m=+183.398435520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.520063 4768 scope.go:117] "RemoveContainer" containerID="d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.520564 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50\": container with ID starting with d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50 not found: ID does not exist" containerID="d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.520607 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50"} err="failed to get container status \"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50\": rpc error: code = NotFound desc = could not find 
container \"d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50\": container with ID starting with d289fe43099e0b3e4bfcf6473a02999e32bca7f6a108e186c77d170f7cee6f50 not found: ID does not exist" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.525441 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.527615 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jslxc"] Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.527795 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerName="route-controller-manager" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.527869 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerName="route-controller-manager" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.527987 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" containerName="route-controller-manager" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.528818 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.530488 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-nrd6m"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.532360 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.543764 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jslxc"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.614807 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.615503 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.615554 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.615630 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgc6z\" (UniqueName: \"kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.616691 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:28.11667354 +0000 UTC m=+183.507159340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.720080 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgc6z\" (UniqueName: \"kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.720156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.720199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.720224 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.720600 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:28.220587077 +0000 UTC m=+183.611072877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.720868 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.721000 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.723357 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xtmth"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.724972 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.729291 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.732161 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtmth"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.745253 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgc6z\" (UniqueName: \"kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z\") pod \"certified-operators-jslxc\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.750729 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9s8hm"] Feb 23 18:36:27 crc kubenswrapper[4768]: W0223 18:36:27.766222 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bcfbee2_d95a_4f58_b436_5233d3691ee8.slice/crio-44c65a2f5ab9dc4eea78d49bee044320e03bb93f1cb95d1e0f8e3d3e73202939 WatchSource:0}: Error finding container 44c65a2f5ab9dc4eea78d49bee044320e03bb93f1cb95d1e0f8e3d3e73202939: Status 404 returned error can't find the container with id 44c65a2f5ab9dc4eea78d49bee044320e03bb93f1cb95d1e0f8e3d3e73202939 Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.821376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 
18:36:27.821608 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-23 18:36:28.321570133 +0000 UTC m=+183.712055933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.821789 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.821828 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.821892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content\") pod \"community-operators-xtmth\" (UID: 
\"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.822089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlm9d\" (UniqueName: \"kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.822593 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-23 18:36:28.322584911 +0000 UTC m=+183.713070701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-jdbtb" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.841669 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.917625 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.918883 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.926017 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.926464 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.926505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.926581 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlm9d\" (UniqueName: \"kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: E0223 18:36:27.927026 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-23 18:36:28.427008162 +0000 UTC m=+183.817493962 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.927393 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.927658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.930724 4768 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-23T18:36:27.371767496Z","Handler":null,"Name":""} Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.956518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlm9d\" (UniqueName: \"kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d\") pod \"community-operators-xtmth\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " 
pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.962453 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.969791 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.971387 4768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 23 18:36:27 crc kubenswrapper[4768]: I0223 18:36:27.971442 4768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.030372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhznd\" (UniqueName: \"kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.030467 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.030499 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.030563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.033878 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 18:36:28 crc kubenswrapper[4768]: E0223 18:36:28.034234 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerName="controller-manager" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.034393 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerName="controller-manager" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.034506 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerName="controller-manager" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.034867 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.040909 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.040996 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.045999 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.060295 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.061559 4768 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.061601 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.082198 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.083199 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.086490 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.087476 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.091965 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.103688 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.103839 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.103946 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.104077 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.104302 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.104477 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.122596 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.125123 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.126804 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.136172 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-jdbtb\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.136859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca\") pod \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.136918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config\") pod \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137063 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert\") pod \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137110 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles\") pod \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137134 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x9tq\" (UniqueName: \"kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq\") pod \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\" (UID: \"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhznd\" (UniqueName: \"kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137552 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.137579 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.138309 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" (UID: "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.139119 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.143471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.143742 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" (UID: "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.151754 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" (UID: "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.152187 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config" (OuterVolumeSpecName: "config") pod "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" (UID: "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.158019 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.160680 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq" (OuterVolumeSpecName: "kube-api-access-8x9tq") pod "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" (UID: "9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9"). InnerVolumeSpecName "kube-api-access-8x9tq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.211125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhznd\" (UniqueName: \"kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd\") pod \"certified-operators-gvspb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.220511 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:28 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:28 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:28 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.220575 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.238380 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.238688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.238967 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239034 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239056 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5z4n\" (UniqueName: \"kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239137 4768 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239155 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239193 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239211 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcv6d\" (UniqueName: \"kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " 
pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239301 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v25jn\" (UniqueName: \"kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239467 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239510 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239522 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x9tq\" (UniqueName: \"kubernetes.io/projected/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-kube-api-access-8x9tq\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239531 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.239539 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.240189 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.243045 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.272781 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.275649 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.296311 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.306110 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340534 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340612 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340639 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340658 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcv6d\" (UniqueName: \"kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d\") pod 
\"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340720 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v25jn\" (UniqueName: \"kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340762 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340801 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340842 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.340864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5z4n\" (UniqueName: \"kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.341930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.342290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.342779 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.343078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.343639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.347455 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.347828 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 
18:36:28.347878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.350165 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.366681 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v25jn\" (UniqueName: \"kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn\") pod \"community-operators-b6bbm\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.371751 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcv6d\" (UniqueName: \"kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d\") pod \"controller-manager-8fb885f79-rxl5w\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.372841 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5z4n\" (UniqueName: \"kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n\") pod \"route-controller-manager-6d6775fbb6-c9t92\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.386449 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jslxc"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.483718 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtmth"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.511991 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.534004 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerStarted","Data":"3c4c9f7a34b7d537849a5ae4376d060ea3b336ab8859fe62aee2e040531835d7"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.537144 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.544488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" event={"ID":"056782af-3e2e-4c24-a9c6-28c7acf1834b","Type":"ContainerStarted","Data":"17469105335d02aa87cf84d9a38d8fef1c0bddd8dfebd35e5cb7583fdaa16879"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.547002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerStarted","Data":"f286661f7f7d73b55eb6893e605d1be9afa315959a772874357ae25a5b6f31fc"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.551943 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.555106 4768 generic.go:334] "Generic (PLEG): container finished" podID="dbed104c-291d-45f5-b41d-99814829422e" containerID="572031507feda3505a8da02af6e84219f377e32eecd91fba14e7ba6e9946f2ef" exitCode=0 Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.555162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" event={"ID":"dbed104c-291d-45f5-b41d-99814829422e","Type":"ContainerDied","Data":"572031507feda3505a8da02af6e84219f377e32eecd91fba14e7ba6e9946f2ef"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.574477 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.576563 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.601565 4768 generic.go:334] "Generic (PLEG): container finished" podID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" containerID="d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481" exitCode=0 Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.601716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" event={"ID":"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9","Type":"ContainerDied","Data":"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.601777 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" 
event={"ID":"9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9","Type":"ContainerDied","Data":"498280ccd22e1f496b6bba28d3e572282bed5fed552679468f00c4dd0815cea2"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.601819 4768 scope.go:117] "RemoveContainer" containerID="d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.602112 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-2fhkt" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.607308 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tllsr" podStartSLOduration=10.607287035 podStartE2EDuration="10.607287035s" podCreationTimestamp="2026-02-23 18:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:28.579783919 +0000 UTC m=+183.970269719" watchObservedRunningTime="2026-02-23 18:36:28.607287035 +0000 UTC m=+183.997772835" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.619397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" event={"ID":"1bcfbee2-d95a-4f58-b436-5233d3691ee8","Type":"ContainerStarted","Data":"6a75ba79c755c1387c95a517e6338a7e624dc648c537fc3401909cbb9912f814"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.619448 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" event={"ID":"1bcfbee2-d95a-4f58-b436-5233d3691ee8","Type":"ContainerStarted","Data":"44c65a2f5ab9dc4eea78d49bee044320e03bb93f1cb95d1e0f8e3d3e73202939"} Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.632788 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 
18:36:28.651694 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.664849 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-2fhkt"] Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.713732 4768 scope.go:117] "RemoveContainer" containerID="d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481" Feb 23 18:36:28 crc kubenswrapper[4768]: E0223 18:36:28.715336 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481\": container with ID starting with d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481 not found: ID does not exist" containerID="d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.715381 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481"} err="failed to get container status \"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481\": rpc error: code = NotFound desc = could not find container \"d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481\": container with ID starting with d70e35de6e549780a843571e5c79be0616ec62e51a36214a1fe097b171345481 not found: ID does not exist" Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.878181 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 23 18:36:28 crc kubenswrapper[4768]: W0223 18:36:28.890169 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod326c2082_9a4f_4f3e_8baf_fdbf8ff864f0.slice/crio-6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4 WatchSource:0}: Error finding container 6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4: Status 404 returned error can't find the container with id 6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4 Feb 23 18:36:28 crc kubenswrapper[4768]: I0223 18:36:28.936193 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.104943 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.199552 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:29 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:29 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:29 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.200189 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.243505 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:36:29 crc kubenswrapper[4768]: W0223 18:36:29.252806 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab70e0f_e340_48a0_8c0b_af46ee8748ad.slice/crio-f54f3e169127d98e65abb9642d51510659d13bb439f6a7e85278f2243ed7d144 WatchSource:0}: Error finding container f54f3e169127d98e65abb9642d51510659d13bb439f6a7e85278f2243ed7d144: Status 404 returned error can't find the container with id f54f3e169127d98e65abb9642d51510659d13bb439f6a7e85278f2243ed7d144 Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.316544 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.317768 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9" path="/var/lib/kubelet/pods/9dde0802-1a1a-4855-a1a8-bd0ed1cc39c9/volumes" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.318528 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c36b0ac3-8286-4df2-87cc-afc1edd2a19b" path="/var/lib/kubelet/pods/c36b0ac3-8286-4df2-87cc-afc1edd2a19b/volumes" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.631914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" event={"ID":"0142e5c4-e4b5-458f-9b5e-59458769788c","Type":"ContainerStarted","Data":"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.631975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" event={"ID":"0142e5c4-e4b5-458f-9b5e-59458769788c","Type":"ContainerStarted","Data":"ff38c6d8afb2de2f2257f3bf88fe1a16befb4d6e45bd8c9a884953807d98e682"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.631995 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.646507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0","Type":"ContainerStarted","Data":"1955c03ffe5f906099a159806bdc92565805f02f187c1bb79bb448b937ecc221"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.646581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0","Type":"ContainerStarted","Data":"6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.650915 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9s8hm" event={"ID":"1bcfbee2-d95a-4f58-b436-5233d3691ee8","Type":"ContainerStarted","Data":"f2bc2e40b4378794b00bc8e3dfeee58d5e4dd4a5f7a20ba419e26dd34db264c3"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.659193 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" podStartSLOduration=3.6591711460000003 podStartE2EDuration="3.659171146s" podCreationTimestamp="2026-02-23 18:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:29.65751363 +0000 UTC m=+185.047999430" watchObservedRunningTime="2026-02-23 18:36:29.659171146 +0000 UTC m=+185.049656946" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.660664 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" 
event={"ID":"bee9fc28-f46f-41fe-86e9-b14cdead9120","Type":"ContainerStarted","Data":"dc0fd87c76df3f0965b5a2e11f81c1b8173c40630c9e8f9e0404e8b1c2f60207"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.660798 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" event={"ID":"bee9fc28-f46f-41fe-86e9-b14cdead9120","Type":"ContainerStarted","Data":"5562d0898e94b52827ef30f29c946338f72348439a240d628de5890708f32857"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.673425 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.673760 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.685798 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerID="5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec" exitCode=0 Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.686976 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerDied","Data":"5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.687012 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerStarted","Data":"71af6cb73d94865c52c329180d429e6c7294e1bc0f31a8537a6740d22d3dbf49"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.689776 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:36:29 crc 
kubenswrapper[4768]: I0223 18:36:29.707171 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.707152945 podStartE2EDuration="1.707152945s" podCreationTimestamp="2026-02-23 18:36:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:29.702936859 +0000 UTC m=+185.093422669" watchObservedRunningTime="2026-02-23 18:36:29.707152945 +0000 UTC m=+185.097638745" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.708456 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9s8hm" podStartSLOduration=124.708452121 podStartE2EDuration="2m4.708452121s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:29.685573181 +0000 UTC m=+185.076058981" watchObservedRunningTime="2026-02-23 18:36:29.708452121 +0000 UTC m=+185.098937921" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.709712 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" event={"ID":"01af9d70-86d3-4601-a000-1344c4752671","Type":"ContainerStarted","Data":"d0732a992483497898f300d86cdf8a50af1ad8383600ef6403a86dedc4b01015"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.709744 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" event={"ID":"01af9d70-86d3-4601-a000-1344c4752671","Type":"ContainerStarted","Data":"816b3a62b63675fdc2aeca677951b0b50f84a3b29e50507f8858f52677b2cf23"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.710608 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.716361 4768 generic.go:334] "Generic (PLEG): container finished" podID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerID="b7e91aeb5088931f033d7a0735392b3ca7545d59375e8a128af918e13cce9300" exitCode=0 Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.716447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerDied","Data":"b7e91aeb5088931f033d7a0735392b3ca7545d59375e8a128af918e13cce9300"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.719749 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.723018 4768 generic.go:334] "Generic (PLEG): container finished" podID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerID="7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e" exitCode=0 Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.723088 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerDied","Data":"7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.723118 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerStarted","Data":"f54f3e169127d98e65abb9642d51510659d13bb439f6a7e85278f2243ed7d144"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.733689 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerID="265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323" 
exitCode=0 Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.735085 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerDied","Data":"265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323"} Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.740308 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jt29q"] Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.741659 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.771964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.774927 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt29q"] Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.782896 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" podStartSLOduration=124.782879617 podStartE2EDuration="2m4.782879617s" podCreationTimestamp="2026-02-23 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:29.780014958 +0000 UTC m=+185.170500758" watchObservedRunningTime="2026-02-23 18:36:29.782879617 +0000 UTC m=+185.173365417" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.876375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " 
pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.876427 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl45n\" (UniqueName: \"kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.876469 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.940699 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" podStartSLOduration=3.940674855 podStartE2EDuration="3.940674855s" podCreationTimestamp="2026-02-23 18:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:29.899806652 +0000 UTC m=+185.290292452" watchObservedRunningTime="2026-02-23 18:36:29.940674855 +0000 UTC m=+185.331160655" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.983210 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.983312 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nl45n\" (UniqueName: \"kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.983368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.983910 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:29 crc kubenswrapper[4768]: I0223 18:36:29.984157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.036932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl45n\" (UniqueName: \"kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n\") pod \"redhat-marketplace-jt29q\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.063750 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.135546 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.136886 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.154136 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.154471 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.196972 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:30 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:30 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:30 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.197032 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288038 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume\") pod \"dbed104c-291d-45f5-b41d-99814829422e\" (UID: 
\"dbed104c-291d-45f5-b41d-99814829422e\") " Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288439 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume\") pod \"dbed104c-291d-45f5-b41d-99814829422e\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kcjj\" (UniqueName: \"kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj\") pod \"dbed104c-291d-45f5-b41d-99814829422e\" (UID: \"dbed104c-291d-45f5-b41d-99814829422e\") " Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288717 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j444v\" (UniqueName: \"kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.288770 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " 
pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.290603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume" (OuterVolumeSpecName: "config-volume") pod "dbed104c-291d-45f5-b41d-99814829422e" (UID: "dbed104c-291d-45f5-b41d-99814829422e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.297182 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dbed104c-291d-45f5-b41d-99814829422e" (UID: "dbed104c-291d-45f5-b41d-99814829422e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.312924 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj" (OuterVolumeSpecName: "kube-api-access-5kcjj") pod "dbed104c-291d-45f5-b41d-99814829422e" (UID: "dbed104c-291d-45f5-b41d-99814829422e"). InnerVolumeSpecName "kube-api-access-5kcjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j444v\" (UniqueName: \"kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393510 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kcjj\" (UniqueName: \"kubernetes.io/projected/dbed104c-291d-45f5-b41d-99814829422e-kube-api-access-5kcjj\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393522 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dbed104c-291d-45f5-b41d-99814829422e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.393534 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/dbed104c-291d-45f5-b41d-99814829422e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.394040 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.394286 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.435208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j444v\" (UniqueName: \"kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v\") pod \"redhat-marketplace-rwjrq\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.459967 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mz6w6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.460025 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mz6w6" podUID="26f1fca3-79fa-4717-8b2b-dbdad99057cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 23 18:36:30 crc 
kubenswrapper[4768]: I0223 18:36:30.460113 4768 patch_prober.go:28] interesting pod/downloads-7954f5f757-mz6w6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.460196 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mz6w6" podUID="26f1fca3-79fa-4717-8b2b-dbdad99057cc" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.24:8080/\": dial tcp 10.217.0.24:8080: connect: connection refused" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.462605 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.619906 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 18:36:30 crc kubenswrapper[4768]: E0223 18:36:30.620114 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbed104c-291d-45f5-b41d-99814829422e" containerName="collect-profiles" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.620126 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbed104c-291d-45f5-b41d-99814829422e" containerName="collect-profiles" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.620215 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbed104c-291d-45f5-b41d-99814829422e" containerName="collect-profiles" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.620591 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.625782 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.625950 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.644890 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.671906 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt29q"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.701195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.701341 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.728863 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.730108 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.740138 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.748866 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.778031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" event={"ID":"dbed104c-291d-45f5-b41d-99814829422e","Type":"ContainerDied","Data":"a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1"} Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.778539 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0a9b40c3ddec23e2101d3aecbb79298195c2d4ca7330534df86ff57563493b1" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.778077 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.780791 4768 generic.go:334] "Generic (PLEG): container finished" podID="326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" containerID="1955c03ffe5f906099a159806bdc92565805f02f187c1bb79bb448b937ecc221" exitCode=0 Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.780889 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0","Type":"ContainerDied","Data":"1955c03ffe5f906099a159806bdc92565805f02f187c1bb79bb448b937ecc221"} Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.806863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfhs\" (UniqueName: \"kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.806906 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.806950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.806984 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.807041 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.807105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.812815 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerStarted","Data":"21ae9a21cc40b976c0d44b718423082d70c741cc72f1470df632e528fa926484"} Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.864349 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.864400 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.868975 4768 patch_prober.go:28] interesting pod/console-f9d7485db-v9856 container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.869532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.869039 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v9856" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.908139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfhs\" (UniqueName: \"kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.908200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.908554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " 
pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.912403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.912807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.948270 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:30 crc kubenswrapper[4768]: I0223 18:36:30.952814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfhs\" (UniqueName: \"kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs\") pod \"redhat-operators-52sbl\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.077107 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.121755 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"] Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.122936 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.134440 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"] Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.194704 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.212096 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.212620 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmrf\" (UniqueName: \"kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.212673 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.212086 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:31 crc 
kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:31 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:31 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.212766 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.214096 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.314502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmrf\" (UniqueName: \"kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.314578 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.314657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.315712 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.315919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.386773 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmrf\" (UniqueName: \"kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf\") pod \"redhat-operators-6lrng\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.435894 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.500375 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:36:31 crc kubenswrapper[4768]: W0223 18:36:31.511547 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1bc222bc_ef64_4a89_923a_b7e54666e246.slice/crio-5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd WatchSource:0}: Error finding container 5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd: Status 404 returned error can't find the container with id 5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.511695 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:36:31 crc kubenswrapper[4768]: W0223 18:36:31.513939 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80aa487f_1e02_4a14_88da_a96a5f2a8f07.slice/crio-fcd8b9884f72738fc4bb598c9b32d915630a55419b0bfd7a5895daec6e2a43c5 WatchSource:0}: Error finding container fcd8b9884f72738fc4bb598c9b32d915630a55419b0bfd7a5895daec6e2a43c5: Status 404 returned error can't find the container with id fcd8b9884f72738fc4bb598c9b32d915630a55419b0bfd7a5895daec6e2a43c5 Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.862640 4768 generic.go:334] "Generic (PLEG): container finished" podID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerID="2e988bf9132d5eb74bfc7f4ff5b83a18374352255cd47201982b68cfcfecae94" exitCode=0 Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.863149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerDied","Data":"2e988bf9132d5eb74bfc7f4ff5b83a18374352255cd47201982b68cfcfecae94"} Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.863191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerStarted","Data":"9e486f9739b35a4dd54cede55084420c96f92fea0b929759135ba20ad32c6a18"} Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.868900 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.887952 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" 
event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerStarted","Data":"fcd8b9884f72738fc4bb598c9b32d915630a55419b0bfd7a5895daec6e2a43c5"} Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.903806 4768 generic.go:334] "Generic (PLEG): container finished" podID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerID="2a4f22896d95b21eee56e681ae06f68f7882643f88db06525007abedde741fe0" exitCode=0 Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.904118 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerDied","Data":"2a4f22896d95b21eee56e681ae06f68f7882643f88db06525007abedde741fe0"} Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.914327 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.924880 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-x58kc" Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.927016 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"] Feb 23 18:36:31 crc kubenswrapper[4768]: I0223 18:36:31.938424 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc222bc-ef64-4a89-923a-b7e54666e246","Type":"ContainerStarted","Data":"5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd"} Feb 23 18:36:31 crc kubenswrapper[4768]: W0223 18:36:31.943171 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1272d613_92f7_455a_80ec_00ed65aa20b9.slice/crio-8611244b3fddd09a7e2508658b7149783db1c0091574c5c435e0a5b912f0a7ba WatchSource:0}: Error finding container 
8611244b3fddd09a7e2508658b7149783db1c0091574c5c435e0a5b912f0a7ba: Status 404 returned error can't find the container with id 8611244b3fddd09a7e2508658b7149783db1c0091574c5c435e0a5b912f0a7ba Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.218292 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:32 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:32 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:32 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.218351 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.366555 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.543684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access\") pod \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.543824 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir\") pod \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\" (UID: \"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0\") " Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.544879 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" (UID: "326c2082-9a4f-4f3e-8baf-fdbf8ff864f0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.552677 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" (UID: "326c2082-9a4f-4f3e-8baf-fdbf8ff864f0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.646565 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.646619 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/326c2082-9a4f-4f3e-8baf-fdbf8ff864f0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.976351 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"326c2082-9a4f-4f3e-8baf-fdbf8ff864f0","Type":"ContainerDied","Data":"6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4"} Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.976395 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da757c2ef8baf6a8117ce121a4a838b79e0dfe8e1ec25266b67c8cba1e071c4" Feb 23 18:36:32 crc kubenswrapper[4768]: I0223 18:36:32.976452 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.000066 4768 generic.go:334] "Generic (PLEG): container finished" podID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerID="aea322ec6cd9015151e9cadec0d6ce6a7f75300ff9a99f2e6fd40976de64b302" exitCode=0 Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.000133 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerDied","Data":"aea322ec6cd9015151e9cadec0d6ce6a7f75300ff9a99f2e6fd40976de64b302"} Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.000162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerStarted","Data":"8611244b3fddd09a7e2508658b7149783db1c0091574c5c435e0a5b912f0a7ba"} Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.013550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc222bc-ef64-4a89-923a-b7e54666e246","Type":"ContainerStarted","Data":"7e9d959655700603a764d9dd7375613deae57f9648617a48eda88a99d75c33f4"} Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.016684 4768 generic.go:334] "Generic (PLEG): container finished" podID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerID="a8782ad90eb6f26623684d02e190cd5aa36f0be9ae2de8cb4448c1a2cabea3ab" exitCode=0 Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.016887 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerDied","Data":"a8782ad90eb6f26623684d02e190cd5aa36f0be9ae2de8cb4448c1a2cabea3ab"} Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.058729 4768 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.058709671 podStartE2EDuration="3.058709671s" podCreationTimestamp="2026-02-23 18:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:36:33.047435771 +0000 UTC m=+188.437921571" watchObservedRunningTime="2026-02-23 18:36:33.058709671 +0000 UTC m=+188.449195471" Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.196069 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:33 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:33 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:33 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.196120 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:33 crc kubenswrapper[4768]: I0223 18:36:33.436003 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-94drr" Feb 23 18:36:34 crc kubenswrapper[4768]: I0223 18:36:34.063189 4768 generic.go:334] "Generic (PLEG): container finished" podID="1bc222bc-ef64-4a89-923a-b7e54666e246" containerID="7e9d959655700603a764d9dd7375613deae57f9648617a48eda88a99d75c33f4" exitCode=0 Feb 23 18:36:34 crc kubenswrapper[4768]: I0223 18:36:34.063942 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"1bc222bc-ef64-4a89-923a-b7e54666e246","Type":"ContainerDied","Data":"7e9d959655700603a764d9dd7375613deae57f9648617a48eda88a99d75c33f4"} Feb 23 18:36:34 crc kubenswrapper[4768]: I0223 18:36:34.198655 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:34 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:34 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:34 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:34 crc kubenswrapper[4768]: I0223 18:36:34.198732 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:35 crc kubenswrapper[4768]: I0223 18:36:35.197529 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:35 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:35 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:35 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:35 crc kubenswrapper[4768]: I0223 18:36:35.197579 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:36 crc kubenswrapper[4768]: I0223 18:36:36.195983 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:36 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:36 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:36 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:36 crc kubenswrapper[4768]: I0223 18:36:36.196379 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:37 crc kubenswrapper[4768]: I0223 18:36:37.195227 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:37 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:37 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:37 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:37 crc kubenswrapper[4768]: I0223 18:36:37.195360 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:38 crc kubenswrapper[4768]: I0223 18:36:38.195961 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:38 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:38 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:38 crc kubenswrapper[4768]: healthz check failed Feb 23 
18:36:38 crc kubenswrapper[4768]: I0223 18:36:38.196039 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:39 crc kubenswrapper[4768]: I0223 18:36:39.195951 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:39 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:39 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:39 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:39 crc kubenswrapper[4768]: I0223 18:36:39.196325 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:39 crc kubenswrapper[4768]: I0223 18:36:39.924663 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:36:40 crc kubenswrapper[4768]: I0223 18:36:40.201991 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:40 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:40 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:40 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:40 crc kubenswrapper[4768]: I0223 18:36:40.202084 4768 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:40 crc kubenswrapper[4768]: I0223 18:36:40.465611 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mz6w6" Feb 23 18:36:40 crc kubenswrapper[4768]: I0223 18:36:40.861330 4768 patch_prober.go:28] interesting pod/console-f9d7485db-v9856 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Feb 23 18:36:40 crc kubenswrapper[4768]: I0223 18:36:40.861382 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v9856" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.26:8443/health\": dial tcp 10.217.0.26:8443: connect: connection refused" Feb 23 18:36:41 crc kubenswrapper[4768]: I0223 18:36:41.195447 4768 patch_prober.go:28] interesting pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:41 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:41 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:41 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:41 crc kubenswrapper[4768]: I0223 18:36:41.195523 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.195753 4768 patch_prober.go:28] interesting 
pod/router-default-5444994796-nnn8b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 23 18:36:42 crc kubenswrapper[4768]: [-]has-synced failed: reason withheld Feb 23 18:36:42 crc kubenswrapper[4768]: [+]process-running ok Feb 23 18:36:42 crc kubenswrapper[4768]: healthz check failed Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.196033 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nnn8b" podUID="51c3071b-8dc3-402a-8f3e-a89fa71f4a54" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.436572 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.530459 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access\") pod \"1bc222bc-ef64-4a89-923a-b7e54666e246\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.530587 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir\") pod \"1bc222bc-ef64-4a89-923a-b7e54666e246\" (UID: \"1bc222bc-ef64-4a89-923a-b7e54666e246\") " Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.530809 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1bc222bc-ef64-4a89-923a-b7e54666e246" (UID: "1bc222bc-ef64-4a89-923a-b7e54666e246"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.532230 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1bc222bc-ef64-4a89-923a-b7e54666e246-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.539571 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1bc222bc-ef64-4a89-923a-b7e54666e246" (UID: "1bc222bc-ef64-4a89-923a-b7e54666e246"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:36:42 crc kubenswrapper[4768]: I0223 18:36:42.633098 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1bc222bc-ef64-4a89-923a-b7e54666e246-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 18:36:43 crc kubenswrapper[4768]: I0223 18:36:43.163990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1bc222bc-ef64-4a89-923a-b7e54666e246","Type":"ContainerDied","Data":"5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd"} Feb 23 18:36:43 crc kubenswrapper[4768]: I0223 18:36:43.164040 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5924c2af0541d7f676eba4084e8710dc0eb734bee59e50851269dadca04158dd" Feb 23 18:36:43 crc kubenswrapper[4768]: I0223 18:36:43.164124 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 23 18:36:43 crc kubenswrapper[4768]: I0223 18:36:43.196338 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:43 crc kubenswrapper[4768]: I0223 18:36:43.199527 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nnn8b" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.267071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.267684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.269465 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.269884 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.279597 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.292777 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.370121 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.370327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.372150 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.382542 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.395779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.403786 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.529947 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.551344 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:36:44 crc kubenswrapper[4768]: I0223 18:36:44.636744 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 23 18:36:48 crc kubenswrapper[4768]: I0223 18:36:48.315884 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:36:50 crc kubenswrapper[4768]: I0223 18:36:50.868988 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:50 crc kubenswrapper[4768]: I0223 18:36:50.872787 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:36:55 crc kubenswrapper[4768]: E0223 18:36:55.352370 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 23 18:36:55 crc kubenswrapper[4768]: E0223 18:36:55.352810 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhznd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gvspb_openshift-marketplace(6ac0a902-00ab-4ec3-8284-06d478d2c4eb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:36:55 crc kubenswrapper[4768]: E0223 18:36:55.354193 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gvspb" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" Feb 23 18:36:55 crc 
kubenswrapper[4768]: E0223 18:36:55.387153 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 23 18:36:55 crc kubenswrapper[4768]: E0223 18:36:55.387364 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v25jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-b6bbm_openshift-marketplace(4ab70e0f-e340-48a0-8c0b-af46ee8748ad): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:36:55 crc kubenswrapper[4768]: E0223 18:36:55.389799 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-b6bbm" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.795043 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gvspb" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.795887 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-b6bbm" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.871759 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.872191 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrfhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-52sbl_openshift-marketplace(80aa487f-1e02-4a14-88da-a96a5f2a8f07): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.873548 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-52sbl" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.875522 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.875701 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvmrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Termin
ationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6lrng_openshift-marketplace(1272d613-92f7-455a-80ec-00ed65aa20b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:36:58 crc kubenswrapper[4768]: E0223 18:36:58.877110 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6lrng" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.070091 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-52sbl" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.070458 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6lrng" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.137839 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.138366 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j444v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rwjrq_openshift-marketplace(900ac8ce-2407-49c9-991f-568685b4f3e5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.139542 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rwjrq" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.191492 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.191645 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nl45n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]
EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jt29q_openshift-marketplace(b1f0d482-3b79-4272-bd0a-976fd8053576): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.192864 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jt29q" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.276360 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jt29q" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" Feb 23 18:37:00 crc kubenswrapper[4768]: E0223 18:37:00.276665 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rwjrq" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" Feb 23 18:37:00 crc kubenswrapper[4768]: W0223 18:37:00.618285 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-fe8992bf1881f0dd685d32f769ded2582ffab3bc2eb203bd80fe9dcbb63dd303 WatchSource:0}: Error finding container 
fe8992bf1881f0dd685d32f769ded2582ffab3bc2eb203bd80fe9dcbb63dd303: Status 404 returned error can't find the container with id fe8992bf1881f0dd685d32f769ded2582ffab3bc2eb203bd80fe9dcbb63dd303 Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.283302 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9ab7e7f535d110fdc2179b9071c512f0acb5dee2d3823eda84d82885025b56e7"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.284147 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4754ecf10d2a5fd7b8e1828fba6f1d3093e2cd0f770735dd0be0a71c7eff82cc"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.287392 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerID="cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f" exitCode=0 Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.287532 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerDied","Data":"cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.290698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"86aa571cfc6552f340477c300d39f9e34d7c7b26232ef140dedaf2e8629497e4"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.290774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fe8992bf1881f0dd685d32f769ded2582ffab3bc2eb203bd80fe9dcbb63dd303"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.293603 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4ad166d85e7a0041fbe47cdf2062c7f3668bf4b41d6faa331005dfe69c6bca5c"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.293742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c9bdb81ca0287d806d949b2025db59d3c0529e64d0a87ed3be9f2b0bf3df4b50"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.301376 4768 generic.go:334] "Generic (PLEG): container finished" podID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerID="f9ba3ebaae18bd73add5b3d33bfd7e7d31d9b1774a9caa7b22ffcd53da3b68aa" exitCode=0 Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.301458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerDied","Data":"f9ba3ebaae18bd73add5b3d33bfd7e7d31d9b1774a9caa7b22ffcd53da3b68aa"} Feb 23 18:37:01 crc kubenswrapper[4768]: I0223 18:37:01.620569 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9blhj" Feb 23 18:37:02 crc kubenswrapper[4768]: I0223 18:37:02.310914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerStarted","Data":"9a6d1d5770bad3e379ce87370a5731f36b74c7149d5ba5a19356c8f143e8eadf"} Feb 23 18:37:02 crc 
kubenswrapper[4768]: I0223 18:37:02.318678 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerStarted","Data":"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700"} Feb 23 18:37:02 crc kubenswrapper[4768]: I0223 18:37:02.318722 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 23 18:37:02 crc kubenswrapper[4768]: I0223 18:37:02.329693 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xtmth" podStartSLOduration=3.229062804 podStartE2EDuration="35.329666649s" podCreationTimestamp="2026-02-23 18:36:27 +0000 UTC" firstStartedPulling="2026-02-23 18:36:29.717962372 +0000 UTC m=+185.108448172" lastFinishedPulling="2026-02-23 18:37:01.818566217 +0000 UTC m=+217.209052017" observedRunningTime="2026-02-23 18:37:02.328995091 +0000 UTC m=+217.719480891" watchObservedRunningTime="2026-02-23 18:37:02.329666649 +0000 UTC m=+217.720152489" Feb 23 18:37:02 crc kubenswrapper[4768]: I0223 18:37:02.351188 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jslxc" podStartSLOduration=3.357670446 podStartE2EDuration="35.351170342s" podCreationTimestamp="2026-02-23 18:36:27 +0000 UTC" firstStartedPulling="2026-02-23 18:36:29.738895927 +0000 UTC m=+185.129381727" lastFinishedPulling="2026-02-23 18:37:01.732395823 +0000 UTC m=+217.122881623" observedRunningTime="2026-02-23 18:37:02.346154741 +0000 UTC m=+217.736640561" watchObservedRunningTime="2026-02-23 18:37:02.351170342 +0000 UTC m=+217.741656142" Feb 23 18:37:03 crc kubenswrapper[4768]: I0223 18:37:03.470987 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 
18:37:06.398220 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 18:37:06 crc kubenswrapper[4768]: E0223 18:37:06.399232 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bc222bc-ef64-4a89-923a-b7e54666e246" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.399265 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bc222bc-ef64-4a89-923a-b7e54666e246" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: E0223 18:37:06.399276 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.399282 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.399421 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bc222bc-ef64-4a89-923a-b7e54666e246" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.399440 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="326c2082-9a4f-4f3e-8baf-fdbf8ff864f0" containerName="pruner" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.399791 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.402577 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.402777 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.421402 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.532732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.533318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.634916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.634997 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.635172 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.654096 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:06 crc kubenswrapper[4768]: I0223 18:37:06.753757 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:07 crc kubenswrapper[4768]: I0223 18:37:07.172273 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 23 18:37:07 crc kubenswrapper[4768]: I0223 18:37:07.345150 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ec824dba-5920-4a44-9742-c014ae5f67f7","Type":"ContainerStarted","Data":"a7ac9439296e8ad050a4901256e98073762b5ea65d1b79cbfca33f2c1fe79d0d"} Feb 23 18:37:07 crc kubenswrapper[4768]: I0223 18:37:07.843072 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:37:07 crc kubenswrapper[4768]: I0223 18:37:07.843518 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.045748 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.060805 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.061051 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.098443 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.353748 4768 generic.go:334] "Generic (PLEG): container finished" podID="ec824dba-5920-4a44-9742-c014ae5f67f7" containerID="4cf2c8f70c8656775e280ef7a2a08f1e72c006f6c040356e0f51a4399fec289a" exitCode=0 Feb 23 18:37:08 crc 
kubenswrapper[4768]: I0223 18:37:08.354194 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ec824dba-5920-4a44-9742-c014ae5f67f7","Type":"ContainerDied","Data":"4cf2c8f70c8656775e280ef7a2a08f1e72c006f6c040356e0f51a4399fec289a"} Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.398626 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:37:08 crc kubenswrapper[4768]: I0223 18:37:08.399993 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.544623 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.545013 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.622958 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.682378 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir\") pod \"ec824dba-5920-4a44-9742-c014ae5f67f7\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.682588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access\") pod \"ec824dba-5920-4a44-9742-c014ae5f67f7\" (UID: \"ec824dba-5920-4a44-9742-c014ae5f67f7\") " Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.684335 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ec824dba-5920-4a44-9742-c014ae5f67f7" (UID: "ec824dba-5920-4a44-9742-c014ae5f67f7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.691884 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ec824dba-5920-4a44-9742-c014ae5f67f7" (UID: "ec824dba-5920-4a44-9742-c014ae5f67f7"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.783612 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec824dba-5920-4a44-9742-c014ae5f67f7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:09 crc kubenswrapper[4768]: I0223 18:37:09.783649 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ec824dba-5920-4a44-9742-c014ae5f67f7-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:10 crc kubenswrapper[4768]: I0223 18:37:10.364605 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 23 18:37:10 crc kubenswrapper[4768]: I0223 18:37:10.364684 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"ec824dba-5920-4a44-9742-c014ae5f67f7","Type":"ContainerDied","Data":"a7ac9439296e8ad050a4901256e98073762b5ea65d1b79cbfca33f2c1fe79d0d"} Feb 23 18:37:10 crc kubenswrapper[4768]: I0223 18:37:10.364717 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ac9439296e8ad050a4901256e98073762b5ea65d1b79cbfca33f2c1fe79d0d" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.396723 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 18:37:12 crc kubenswrapper[4768]: E0223 18:37:12.397794 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec824dba-5920-4a44-9742-c014ae5f67f7" containerName="pruner" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.397811 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec824dba-5920-4a44-9742-c014ae5f67f7" containerName="pruner" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.397918 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ec824dba-5920-4a44-9742-c014ae5f67f7" containerName="pruner" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.398542 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.401124 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.401392 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.408399 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.584807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.584864 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.584902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc 
kubenswrapper[4768]: I0223 18:37:12.686540 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.686594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.686632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.686654 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.686688 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.705397 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access\") pod \"installer-9-crc\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:12 crc kubenswrapper[4768]: I0223 18:37:12.712476 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.119921 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 23 18:37:13 crc kubenswrapper[4768]: W0223 18:37:13.172862 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod34c3f34f_e575_4f6b_a730_b27b0e522912.slice/crio-484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466 WatchSource:0}: Error finding container 484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466: Status 404 returned error can't find the container with id 484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466 Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.384599 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerStarted","Data":"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4"} Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.387060 4768 generic.go:334] "Generic (PLEG): container finished" podID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerID="3935eeab991746241c648135532a0d273247b688562c58d8ce5425625437a4a2" exitCode=0 Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.387121 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerDied","Data":"3935eeab991746241c648135532a0d273247b688562c58d8ce5425625437a4a2"} Feb 23 18:37:13 crc kubenswrapper[4768]: 
I0223 18:37:13.393293 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerID="e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0" exitCode=0 Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.393373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerDied","Data":"e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0"} Feb 23 18:37:13 crc kubenswrapper[4768]: I0223 18:37:13.411944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34c3f34f-e575-4f6b-a730-b27b0e522912","Type":"ContainerStarted","Data":"484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466"} Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.420718 4768 generic.go:334] "Generic (PLEG): container finished" podID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerID="6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4" exitCode=0 Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.420810 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerDied","Data":"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4"} Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.423725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerStarted","Data":"cd6a22d907cba28997a463e67903b5022736fbc839bebe41f34af65bcf93ad63"} Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.426197 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" 
event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerStarted","Data":"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989"} Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.445024 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34c3f34f-e575-4f6b-a730-b27b0e522912","Type":"ContainerStarted","Data":"f27ce67eebb95d5a0b14fd323a581e2949d18cb6abd8603cab4a6fc595b5f815"} Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.475275 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rwjrq" podStartSLOduration=2.48275506 podStartE2EDuration="44.475238467s" podCreationTimestamp="2026-02-23 18:36:30 +0000 UTC" firstStartedPulling="2026-02-23 18:36:31.888091347 +0000 UTC m=+187.278577137" lastFinishedPulling="2026-02-23 18:37:13.880574754 +0000 UTC m=+229.271060544" observedRunningTime="2026-02-23 18:37:14.472342876 +0000 UTC m=+229.862828686" watchObservedRunningTime="2026-02-23 18:37:14.475238467 +0000 UTC m=+229.865724267" Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.498040 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gvspb" podStartSLOduration=3.390144831 podStartE2EDuration="47.498016035s" podCreationTimestamp="2026-02-23 18:36:27 +0000 UTC" firstStartedPulling="2026-02-23 18:36:29.689471479 +0000 UTC m=+185.079957279" lastFinishedPulling="2026-02-23 18:37:13.797342683 +0000 UTC m=+229.187828483" observedRunningTime="2026-02-23 18:37:14.494063894 +0000 UTC m=+229.884549714" watchObservedRunningTime="2026-02-23 18:37:14.498016035 +0000 UTC m=+229.888501835" Feb 23 18:37:14 crc kubenswrapper[4768]: I0223 18:37:14.511756 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.5117235190000002 podStartE2EDuration="2.511723519s" 
podCreationTimestamp="2026-02-23 18:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:37:14.509144606 +0000 UTC m=+229.899630406" watchObservedRunningTime="2026-02-23 18:37:14.511723519 +0000 UTC m=+229.902209319" Feb 23 18:37:15 crc kubenswrapper[4768]: I0223 18:37:15.453102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerStarted","Data":"edde0223f1ef6a4122dcd7f5b7bd926ec1c50d0ad6bc646f8777bcf1dd447d3c"} Feb 23 18:37:15 crc kubenswrapper[4768]: I0223 18:37:15.456304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerStarted","Data":"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d"} Feb 23 18:37:15 crc kubenswrapper[4768]: I0223 18:37:15.506980 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b6bbm" podStartSLOduration=2.421704016 podStartE2EDuration="47.506937099s" podCreationTimestamp="2026-02-23 18:36:28 +0000 UTC" firstStartedPulling="2026-02-23 18:36:29.724895612 +0000 UTC m=+185.115381412" lastFinishedPulling="2026-02-23 18:37:14.810128695 +0000 UTC m=+230.200614495" observedRunningTime="2026-02-23 18:37:15.50660937 +0000 UTC m=+230.897095170" watchObservedRunningTime="2026-02-23 18:37:15.506937099 +0000 UTC m=+230.897422899" Feb 23 18:37:16 crc kubenswrapper[4768]: I0223 18:37:16.463804 4768 generic.go:334] "Generic (PLEG): container finished" podID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerID="edde0223f1ef6a4122dcd7f5b7bd926ec1c50d0ad6bc646f8777bcf1dd447d3c" exitCode=0 Feb 23 18:37:16 crc kubenswrapper[4768]: I0223 18:37:16.463883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerDied","Data":"edde0223f1ef6a4122dcd7f5b7bd926ec1c50d0ad6bc646f8777bcf1dd447d3c"} Feb 23 18:37:16 crc kubenswrapper[4768]: I0223 18:37:16.470197 4768 generic.go:334] "Generic (PLEG): container finished" podID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerID="a3e21b8cc9243173f71c05168270ea37c572449a782e53d6ac391bd4b894ca35" exitCode=0 Feb 23 18:37:16 crc kubenswrapper[4768]: I0223 18:37:16.470268 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerDied","Data":"a3e21b8cc9243173f71c05168270ea37c572449a782e53d6ac391bd4b894ca35"} Feb 23 18:37:16 crc kubenswrapper[4768]: I0223 18:37:16.485749 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerStarted","Data":"f31bbc669164470861dde11c129797c425c2d5a7b9aada31524bc43942092a33"} Feb 23 18:37:17 crc kubenswrapper[4768]: I0223 18:37:17.497403 4768 generic.go:334] "Generic (PLEG): container finished" podID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerID="f31bbc669164470861dde11c129797c425c2d5a7b9aada31524bc43942092a33" exitCode=0 Feb 23 18:37:17 crc kubenswrapper[4768]: I0223 18:37:17.497555 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerDied","Data":"f31bbc669164470861dde11c129797c425c2d5a7b9aada31524bc43942092a33"} Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.244112 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.244220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.293224 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.549685 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.577609 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.577701 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:18 crc kubenswrapper[4768]: I0223 18:37:18.623664 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:19 crc kubenswrapper[4768]: I0223 18:37:19.513980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerStarted","Data":"6d244d56e25d68bca5ead99594521b69d11be655677ea2adb3fa711d30dc0566"} Feb 23 18:37:19 crc kubenswrapper[4768]: I0223 18:37:19.544557 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jt29q" podStartSLOduration=4.150370044 podStartE2EDuration="50.544524668s" podCreationTimestamp="2026-02-23 18:36:29 +0000 UTC" firstStartedPulling="2026-02-23 18:36:31.912465707 +0000 UTC m=+187.302951507" lastFinishedPulling="2026-02-23 18:37:18.306620331 +0000 UTC m=+233.697106131" observedRunningTime="2026-02-23 18:37:19.541858293 +0000 UTC m=+234.932344103" watchObservedRunningTime="2026-02-23 18:37:19.544524668 +0000 UTC m=+234.935010478" Feb 23 18:37:19 crc 
kubenswrapper[4768]: I0223 18:37:19.562708 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.065308 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.065364 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.463373 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.463429 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.522049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerStarted","Data":"f90a09537949dfe88e7b117083c7035e4dcad874c64bce7b2f4778f5bb7c706a"} Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.524994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerStarted","Data":"844f771e8f614d6bd071bd5c5da5342833e42ac4a611e6bc9dfc26de84b77557"} Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.547195 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6lrng" podStartSLOduration=2.928794877 podStartE2EDuration="49.547177267s" podCreationTimestamp="2026-02-23 18:36:31 +0000 UTC" firstStartedPulling="2026-02-23 18:36:33.003451103 +0000 UTC m=+188.393936903" lastFinishedPulling="2026-02-23 
18:37:19.621833503 +0000 UTC m=+235.012319293" observedRunningTime="2026-02-23 18:37:20.546155998 +0000 UTC m=+235.936641808" watchObservedRunningTime="2026-02-23 18:37:20.547177267 +0000 UTC m=+235.937663067" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.566068 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:20 crc kubenswrapper[4768]: I0223 18:37:20.640668 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.081733 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.082440 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gvspb" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="registry-server" containerID="cri-o://1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989" gracePeriod=2 Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.110870 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jt29q" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="registry-server" probeResult="failure" output=< Feb 23 18:37:21 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 18:37:21 crc kubenswrapper[4768]: > Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.458277 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.511976 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.515535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.555552 4768 generic.go:334] "Generic (PLEG): container finished" podID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerID="1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989" exitCode=0 Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.556642 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvspb" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.557102 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerDied","Data":"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989"} Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.557154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvspb" event={"ID":"6ac0a902-00ab-4ec3-8284-06d478d2c4eb","Type":"ContainerDied","Data":"71af6cb73d94865c52c329180d429e6c7294e1bc0f31a8537a6740d22d3dbf49"} Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.557176 4768 scope.go:117] "RemoveContainer" containerID="1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.581546 4768 scope.go:117] "RemoveContainer" containerID="e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.585084 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-52sbl" podStartSLOduration=4.349106357 podStartE2EDuration="51.585067762s" podCreationTimestamp="2026-02-23 18:36:30 +0000 UTC" firstStartedPulling="2026-02-23 18:36:33.022483866 +0000 UTC m=+188.412969666" lastFinishedPulling="2026-02-23 18:37:20.258445261 +0000 UTC m=+235.648931071" observedRunningTime="2026-02-23 18:37:21.58428111 +0000 UTC m=+236.974766930" watchObservedRunningTime="2026-02-23 18:37:21.585067762 +0000 UTC m=+236.975553582" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.616552 4768 scope.go:117] "RemoveContainer" containerID="5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.621508 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities\") pod \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.621686 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhznd\" (UniqueName: \"kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd\") pod \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.621760 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content\") pod \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\" (UID: \"6ac0a902-00ab-4ec3-8284-06d478d2c4eb\") " Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.622594 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities" (OuterVolumeSpecName: "utilities") pod "6ac0a902-00ab-4ec3-8284-06d478d2c4eb" (UID: "6ac0a902-00ab-4ec3-8284-06d478d2c4eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.633429 4768 scope.go:117] "RemoveContainer" containerID="1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.633516 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd" (OuterVolumeSpecName: "kube-api-access-dhznd") pod "6ac0a902-00ab-4ec3-8284-06d478d2c4eb" (UID: "6ac0a902-00ab-4ec3-8284-06d478d2c4eb"). InnerVolumeSpecName "kube-api-access-dhznd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:21 crc kubenswrapper[4768]: E0223 18:37:21.636463 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989\": container with ID starting with 1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989 not found: ID does not exist" containerID="1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.636518 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989"} err="failed to get container status \"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989\": rpc error: code = NotFound desc = could not find container \"1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989\": container with ID starting with 1b6a4585b2d7a77ffa09aaa2a60d8b0be043c1746f1179a159d3e0756f452989 not found: ID does not exist" Feb 23 
18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.636545 4768 scope.go:117] "RemoveContainer" containerID="e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0" Feb 23 18:37:21 crc kubenswrapper[4768]: E0223 18:37:21.637131 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0\": container with ID starting with e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0 not found: ID does not exist" containerID="e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.637162 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0"} err="failed to get container status \"e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0\": rpc error: code = NotFound desc = could not find container \"e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0\": container with ID starting with e3c1cbcaf136bdf9d6fc33833e489e5ca8424cc279e134cfb4dc4b0b92d0f9c0 not found: ID does not exist" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.637181 4768 scope.go:117] "RemoveContainer" containerID="5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec" Feb 23 18:37:21 crc kubenswrapper[4768]: E0223 18:37:21.637626 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec\": container with ID starting with 5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec not found: ID does not exist" containerID="5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.637653 4768 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec"} err="failed to get container status \"5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec\": rpc error: code = NotFound desc = could not find container \"5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec\": container with ID starting with 5bfdc0f02dcbb612a36e821cf1d71f6408f7e64334a1e1d325bc66473c3392ec not found: ID does not exist" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.678872 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ac0a902-00ab-4ec3-8284-06d478d2c4eb" (UID: "6ac0a902-00ab-4ec3-8284-06d478d2c4eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.723110 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhznd\" (UniqueName: \"kubernetes.io/projected/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-kube-api-access-dhznd\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.723169 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.723183 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ac0a902-00ab-4ec3-8284-06d478d2c4eb-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.890639 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:37:21 crc kubenswrapper[4768]: I0223 18:37:21.895654 4768 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gvspb"] Feb 23 18:37:22 crc kubenswrapper[4768]: I0223 18:37:22.570626 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6lrng" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="registry-server" probeResult="failure" output=< Feb 23 18:37:22 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 18:37:22 crc kubenswrapper[4768]: > Feb 23 18:37:23 crc kubenswrapper[4768]: I0223 18:37:23.315082 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" path="/var/lib/kubelet/pods/6ac0a902-00ab-4ec3-8284-06d478d2c4eb/volumes" Feb 23 18:37:23 crc kubenswrapper[4768]: I0223 18:37:23.482194 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:37:23 crc kubenswrapper[4768]: I0223 18:37:23.482511 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b6bbm" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="registry-server" containerID="cri-o://9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d" gracePeriod=2 Feb 23 18:37:23 crc kubenswrapper[4768]: I0223 18:37:23.681238 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:37:23 crc kubenswrapper[4768]: I0223 18:37:23.681626 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rwjrq" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="registry-server" containerID="cri-o://cd6a22d907cba28997a463e67903b5022736fbc839bebe41f34af65bcf93ad63" gracePeriod=2 Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.462324 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.574462 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v25jn\" (UniqueName: \"kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn\") pod \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.574566 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities\") pod \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.574667 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content\") pod \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\" (UID: \"4ab70e0f-e340-48a0-8c0b-af46ee8748ad\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.576226 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities" (OuterVolumeSpecName: "utilities") pod "4ab70e0f-e340-48a0-8c0b-af46ee8748ad" (UID: "4ab70e0f-e340-48a0-8c0b-af46ee8748ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.587627 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn" (OuterVolumeSpecName: "kube-api-access-v25jn") pod "4ab70e0f-e340-48a0-8c0b-af46ee8748ad" (UID: "4ab70e0f-e340-48a0-8c0b-af46ee8748ad"). InnerVolumeSpecName "kube-api-access-v25jn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.593168 4768 generic.go:334] "Generic (PLEG): container finished" podID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerID="9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d" exitCode=0 Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.593320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerDied","Data":"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d"} Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.593358 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b6bbm" event={"ID":"4ab70e0f-e340-48a0-8c0b-af46ee8748ad","Type":"ContainerDied","Data":"f54f3e169127d98e65abb9642d51510659d13bb439f6a7e85278f2243ed7d144"} Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.593395 4768 scope.go:117] "RemoveContainer" containerID="9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.593626 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b6bbm" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.599581 4768 generic.go:334] "Generic (PLEG): container finished" podID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerID="cd6a22d907cba28997a463e67903b5022736fbc839bebe41f34af65bcf93ad63" exitCode=0 Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.599640 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerDied","Data":"cd6a22d907cba28997a463e67903b5022736fbc839bebe41f34af65bcf93ad63"} Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.612052 4768 scope.go:117] "RemoveContainer" containerID="6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.634519 4768 scope.go:117] "RemoveContainer" containerID="7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.655764 4768 scope.go:117] "RemoveContainer" containerID="9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d" Feb 23 18:37:24 crc kubenswrapper[4768]: E0223 18:37:24.656301 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d\": container with ID starting with 9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d not found: ID does not exist" containerID="9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.656544 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d"} err="failed to get container status \"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d\": rpc 
error: code = NotFound desc = could not find container \"9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d\": container with ID starting with 9230734ef813170271334a61069fd47e301242efdda0c67f7be37a9b37cf115d not found: ID does not exist" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.656659 4768 scope.go:117] "RemoveContainer" containerID="6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4" Feb 23 18:37:24 crc kubenswrapper[4768]: E0223 18:37:24.657304 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4\": container with ID starting with 6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4 not found: ID does not exist" containerID="6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.657486 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4"} err="failed to get container status \"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4\": rpc error: code = NotFound desc = could not find container \"6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4\": container with ID starting with 6e6d672ded362595627a3c635de354de73455560f9af3ec80c346f32219801c4 not found: ID does not exist" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.657637 4768 scope.go:117] "RemoveContainer" containerID="7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e" Feb 23 18:37:24 crc kubenswrapper[4768]: E0223 18:37:24.658064 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e\": container with ID starting with 
7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e not found: ID does not exist" containerID="7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.658233 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e"} err="failed to get container status \"7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e\": rpc error: code = NotFound desc = could not find container \"7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e\": container with ID starting with 7b1d231bf75bb7bc7a28fb24f1c3e12c8b33f89e94901c134c10bdca3d9acf0e not found: ID does not exist" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.673474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ab70e0f-e340-48a0-8c0b-af46ee8748ad" (UID: "4ab70e0f-e340-48a0-8c0b-af46ee8748ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.676638 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v25jn\" (UniqueName: \"kubernetes.io/projected/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-kube-api-access-v25jn\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.676659 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.676670 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ab70e0f-e340-48a0-8c0b-af46ee8748ad-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.936280 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.946311 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.952897 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b6bbm"] Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.984678 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities\") pod \"900ac8ce-2407-49c9-991f-568685b4f3e5\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.984758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content\") pod \"900ac8ce-2407-49c9-991f-568685b4f3e5\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.984836 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j444v\" (UniqueName: \"kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v\") pod \"900ac8ce-2407-49c9-991f-568685b4f3e5\" (UID: \"900ac8ce-2407-49c9-991f-568685b4f3e5\") " Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.985634 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities" (OuterVolumeSpecName: "utilities") pod "900ac8ce-2407-49c9-991f-568685b4f3e5" (UID: "900ac8ce-2407-49c9-991f-568685b4f3e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:24 crc kubenswrapper[4768]: I0223 18:37:24.988453 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v" (OuterVolumeSpecName: "kube-api-access-j444v") pod "900ac8ce-2407-49c9-991f-568685b4f3e5" (UID: "900ac8ce-2407-49c9-991f-568685b4f3e5"). InnerVolumeSpecName "kube-api-access-j444v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.009070 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "900ac8ce-2407-49c9-991f-568685b4f3e5" (UID: "900ac8ce-2407-49c9-991f-568685b4f3e5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.085964 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.086006 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900ac8ce-2407-49c9-991f-568685b4f3e5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.086018 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j444v\" (UniqueName: \"kubernetes.io/projected/900ac8ce-2407-49c9-991f-568685b4f3e5-kube-api-access-j444v\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.322048 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" path="/var/lib/kubelet/pods/4ab70e0f-e340-48a0-8c0b-af46ee8748ad/volumes" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.607159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rwjrq" event={"ID":"900ac8ce-2407-49c9-991f-568685b4f3e5","Type":"ContainerDied","Data":"9e486f9739b35a4dd54cede55084420c96f92fea0b929759135ba20ad32c6a18"} Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.607229 4768 scope.go:117] "RemoveContainer" containerID="cd6a22d907cba28997a463e67903b5022736fbc839bebe41f34af65bcf93ad63" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.607319 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rwjrq" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.623733 4768 scope.go:117] "RemoveContainer" containerID="3935eeab991746241c648135532a0d273247b688562c58d8ce5425625437a4a2" Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.627006 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.630500 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rwjrq"] Feb 23 18:37:25 crc kubenswrapper[4768]: I0223 18:37:25.636551 4768 scope.go:117] "RemoveContainer" containerID="2e988bf9132d5eb74bfc7f4ff5b83a18374352255cd47201982b68cfcfecae94" Feb 23 18:37:27 crc kubenswrapper[4768]: I0223 18:37:27.319813 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" path="/var/lib/kubelet/pods/900ac8ce-2407-49c9-991f-568685b4f3e5/volumes" Feb 23 18:37:28 crc kubenswrapper[4768]: I0223 18:37:28.511508 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerName="oauth-openshift" containerID="cri-o://5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650" gracePeriod=15 Feb 23 18:37:28 crc kubenswrapper[4768]: I0223 18:37:28.902194 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069715 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069809 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069857 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069939 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 
23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.069979 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070038 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070084 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070130 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070183 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wqcsv\" (UniqueName: \"kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070216 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070244 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070308 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template\") pod \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\" (UID: \"31ebc831-fd3d-4dfa-8b67-a0fa553b3472\") " Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070415 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.070925 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.071356 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.071487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.071743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.071995 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.072024 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.072040 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.072056 4768 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.072074 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.076491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv" (OuterVolumeSpecName: "kube-api-access-wqcsv") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "kube-api-access-wqcsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.082441 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.082567 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.082797 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.083016 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.083099 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.083312 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.083590 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.083732 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "31ebc831-fd3d-4dfa-8b67-a0fa553b3472" (UID: "31ebc831-fd3d-4dfa-8b67-a0fa553b3472"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.173963 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174030 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174076 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqcsv\" (UniqueName: \"kubernetes.io/projected/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-kube-api-access-wqcsv\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174101 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174129 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174154 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174177 4768 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174200 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.174224 4768 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/31ebc831-fd3d-4dfa-8b67-a0fa553b3472-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.637614 4768 generic.go:334] "Generic (PLEG): container finished" podID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerID="5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650" exitCode=0 Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.637769 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.637799 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" event={"ID":"31ebc831-fd3d-4dfa-8b67-a0fa553b3472","Type":"ContainerDied","Data":"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650"} Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.639170 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lsn69" event={"ID":"31ebc831-fd3d-4dfa-8b67-a0fa553b3472","Type":"ContainerDied","Data":"a84a0edd9752ac55ab597f6cc88af5ff05fa88c99f8fb4577953434be7732e3d"} Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.639202 4768 scope.go:117] "RemoveContainer" containerID="5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.666037 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.668858 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lsn69"] Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.675077 4768 scope.go:117] "RemoveContainer" containerID="5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650" Feb 23 18:37:29 crc kubenswrapper[4768]: E0223 18:37:29.675679 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650\": container with ID starting with 5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650 not found: ID does not exist" containerID="5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650" Feb 23 18:37:29 crc kubenswrapper[4768]: I0223 18:37:29.675817 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650"} err="failed to get container status \"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650\": rpc error: code = NotFound desc = could not find container \"5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650\": container with ID starting with 5a5ee4235b6765417a0c8d597de16520413df5d54f8306114a9fe106765ab650 not found: ID does not exist" Feb 23 18:37:30 crc kubenswrapper[4768]: I0223 18:37:30.144684 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:37:30 crc kubenswrapper[4768]: I0223 18:37:30.213070 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.078045 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.078104 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.137716 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.315419 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" path="/var/lib/kubelet/pods/31ebc831-fd3d-4dfa-8b67-a0fa553b3472/volumes" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.563902 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.618831 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:37:31 crc kubenswrapper[4768]: I0223 18:37:31.694236 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.485354 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"] Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.486225 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6lrng" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="registry-server" containerID="cri-o://844f771e8f614d6bd071bd5c5da5342833e42ac4a611e6bc9dfc26de84b77557" gracePeriod=2 Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.669522 4768 generic.go:334] "Generic (PLEG): container finished" podID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerID="844f771e8f614d6bd071bd5c5da5342833e42ac4a611e6bc9dfc26de84b77557" exitCode=0 Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.669609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerDied","Data":"844f771e8f614d6bd071bd5c5da5342833e42ac4a611e6bc9dfc26de84b77557"} Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.941426 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6lrng" Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.947174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities\") pod \"1272d613-92f7-455a-80ec-00ed65aa20b9\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.947216 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmrf\" (UniqueName: \"kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf\") pod \"1272d613-92f7-455a-80ec-00ed65aa20b9\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.947352 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content\") pod \"1272d613-92f7-455a-80ec-00ed65aa20b9\" (UID: \"1272d613-92f7-455a-80ec-00ed65aa20b9\") " Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.950797 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities" (OuterVolumeSpecName: "utilities") pod "1272d613-92f7-455a-80ec-00ed65aa20b9" (UID: "1272d613-92f7-455a-80ec-00ed65aa20b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:33 crc kubenswrapper[4768]: I0223 18:37:33.958114 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf" (OuterVolumeSpecName: "kube-api-access-fvmrf") pod "1272d613-92f7-455a-80ec-00ed65aa20b9" (UID: "1272d613-92f7-455a-80ec-00ed65aa20b9"). InnerVolumeSpecName "kube-api-access-fvmrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.048228 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.048305 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmrf\" (UniqueName: \"kubernetes.io/projected/1272d613-92f7-455a-80ec-00ed65aa20b9-kube-api-access-fvmrf\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.093583 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1272d613-92f7-455a-80ec-00ed65aa20b9" (UID: "1272d613-92f7-455a-80ec-00ed65aa20b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.120424 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-9qbqw"] Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.120810 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.120831 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.120849 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.120947 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" 
containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.120963 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.120977 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.120990 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121000 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121014 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121022 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121038 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121047 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="extract-utilities" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121058 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121068 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" 
containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121083 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121095 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121108 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121118 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121131 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerName="oauth-openshift" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121141 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerName="oauth-openshift" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121160 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121170 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121178 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" 
containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: E0223 18:37:34.121190 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121198 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="extract-content" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121401 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="900ac8ce-2407-49c9-991f-568685b4f3e5" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121416 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ebc831-fd3d-4dfa-8b67-a0fa553b3472" containerName="oauth-openshift" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121427 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121438 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac0a902-00ab-4ec3-8284-06d478d2c4eb" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.121470 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab70e0f-e340-48a0-8c0b-af46ee8748ad" containerName="registry-server" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.122447 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.130581 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.130632 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.130581 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.130902 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.131103 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.131332 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.131519 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.131556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.133234 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.135037 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 18:37:34 crc 
kubenswrapper[4768]: I0223 18:37:34.136895 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.138887 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.141067 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.142755 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-9qbqw"]
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.145364 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.156537 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.187809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.187908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.187937 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.187964 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.187989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188124 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188151 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188178 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188199 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-audit-policies\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188221 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40293355-4e78-4dc4-802c-cf38f2898c35-audit-dir\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188263 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188313 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82ml\" (UniqueName: \"kubernetes.io/projected/40293355-4e78-4dc4-802c-cf38f2898c35-kube-api-access-h82ml\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188332 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.188373 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1272d613-92f7-455a-80ec-00ed65aa20b9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289624 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289660 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289689 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289723 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289772 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.289857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290276 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-audit-policies\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290344 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40293355-4e78-4dc4-802c-cf38f2898c35-audit-dir\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290499 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40293355-4e78-4dc4-802c-cf38f2898c35-audit-dir\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.290431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.291726 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.291806 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.291855 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-audit-policies\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.292304 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82ml\" (UniqueName: \"kubernetes.io/projected/40293355-4e78-4dc4-802c-cf38f2898c35-kube-api-access-h82ml\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.292381 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.296613 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-session\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.296674 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.297921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-error\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.298055 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.309446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.310375 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.312016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.312275 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-user-template-login\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.320758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/40293355-4e78-4dc4-802c-cf38f2898c35-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.321217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82ml\" (UniqueName: \"kubernetes.io/projected/40293355-4e78-4dc4-802c-cf38f2898c35-kube-api-access-h82ml\") pod \"oauth-openshift-7b57696689-9qbqw\" (UID: \"40293355-4e78-4dc4-802c-cf38f2898c35\") " pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.502859 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.560422 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.684973 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6lrng" event={"ID":"1272d613-92f7-455a-80ec-00ed65aa20b9","Type":"ContainerDied","Data":"8611244b3fddd09a7e2508658b7149783db1c0091574c5c435e0a5b912f0a7ba"}
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.685074 4768 scope.go:117] "RemoveContainer" containerID="844f771e8f614d6bd071bd5c5da5342833e42ac4a611e6bc9dfc26de84b77557"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.685169 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6lrng"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.706021 4768 scope.go:117] "RemoveContainer" containerID="edde0223f1ef6a4122dcd7f5b7bd926ec1c50d0ad6bc646f8777bcf1dd447d3c"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.721032 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"]
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.722605 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6lrng"]
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.742201 4768 scope.go:117] "RemoveContainer" containerID="aea322ec6cd9015151e9cadec0d6ce6a7f75300ff9a99f2e6fd40976de64b302"
Feb 23 18:37:34 crc kubenswrapper[4768]: I0223 18:37:34.791488 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b57696689-9qbqw"]
Feb 23 18:37:34 crc kubenswrapper[4768]: W0223 18:37:34.799838 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40293355_4e78_4dc4_802c_cf38f2898c35.slice/crio-06df755023f037efb8abc5d1e23fe961c4adbf9b4025ea83f85f28d61ebd805f WatchSource:0}: Error finding container 06df755023f037efb8abc5d1e23fe961c4adbf9b4025ea83f85f28d61ebd805f: Status 404 returned error can't find the container with id 06df755023f037efb8abc5d1e23fe961c4adbf9b4025ea83f85f28d61ebd805f
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.327464 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1272d613-92f7-455a-80ec-00ed65aa20b9" path="/var/lib/kubelet/pods/1272d613-92f7-455a-80ec-00ed65aa20b9/volumes"
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.696322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw" event={"ID":"40293355-4e78-4dc4-802c-cf38f2898c35","Type":"ContainerStarted","Data":"81452d93096ba9b641e510e465303d7c07fa36c8947b461f7b9941436dea13ed"}
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.696376 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw" event={"ID":"40293355-4e78-4dc4-802c-cf38f2898c35","Type":"ContainerStarted","Data":"06df755023f037efb8abc5d1e23fe961c4adbf9b4025ea83f85f28d61ebd805f"}
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.696793 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.708192 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw"
Feb 23 18:37:35 crc kubenswrapper[4768]: I0223 18:37:35.735967 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b57696689-9qbqw" podStartSLOduration=32.735931805 podStartE2EDuration="32.735931805s" podCreationTimestamp="2026-02-23 18:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:37:35.728688563 +0000 UTC m=+251.119174373" watchObservedRunningTime="2026-02-23 18:37:35.735931805 +0000 UTC m=+251.126417695"
Feb 23 18:37:39 crc kubenswrapper[4768]: I0223 18:37:39.545583 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 18:37:39 crc kubenswrapper[4768]: I0223 18:37:39.545919 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.206333 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.207763 4768 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208059 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a" gracePeriod=15
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208236 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208466 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645" gracePeriod=15
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208586 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86" gracePeriod=15
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208639 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f" gracePeriod=15
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.208674 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131" gracePeriod=15
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209012 4768 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209149 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209160 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209169 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209175 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209184 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209190 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209197 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209202 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209210 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209215 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209431 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209440 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209453 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209459 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.209469 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209475 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209568 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209578 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209594 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209602 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209634 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209641 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209648 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.209654 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.210500 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.210514 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: E0223 18:37:51.210524 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.210530 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.210710 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.365674 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366225 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366305 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366335 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366412 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.366610 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468430 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468603 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468674 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468680 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468780 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468977 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName:
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469067 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.468788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469377 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469400 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469375 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.469380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.816497 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.818649 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.819294 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645" exitCode=0 Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.819323 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86" exitCode=0 Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.819337 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f" exitCode=0 Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.819348 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131" exitCode=2 Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.819450 4768 scope.go:117] "RemoveContainer" containerID="70c2826b9e06b2c1bf2d95e5a2260c526785228bcc998cc32c2e91ee6e0f87f6" Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.821379 4768 generic.go:334] "Generic (PLEG): container finished" podID="34c3f34f-e575-4f6b-a730-b27b0e522912" containerID="f27ce67eebb95d5a0b14fd323a581e2949d18cb6abd8603cab4a6fc595b5f815" exitCode=0 Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.821444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34c3f34f-e575-4f6b-a730-b27b0e522912","Type":"ContainerDied","Data":"f27ce67eebb95d5a0b14fd323a581e2949d18cb6abd8603cab4a6fc595b5f815"} Feb 23 18:37:51 crc kubenswrapper[4768]: I0223 18:37:51.823483 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:52 crc 
kubenswrapper[4768]: I0223 18:37:52.830953 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.066850 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.067609 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.194943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access\") pod \"34c3f34f-e575-4f6b-a730-b27b0e522912\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.194996 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir\") pod \"34c3f34f-e575-4f6b-a730-b27b0e522912\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.195014 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock\") pod \"34c3f34f-e575-4f6b-a730-b27b0e522912\" (UID: \"34c3f34f-e575-4f6b-a730-b27b0e522912\") " Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.195105 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "34c3f34f-e575-4f6b-a730-b27b0e522912" (UID: "34c3f34f-e575-4f6b-a730-b27b0e522912"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.195138 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock" (OuterVolumeSpecName: "var-lock") pod "34c3f34f-e575-4f6b-a730-b27b0e522912" (UID: "34c3f34f-e575-4f6b-a730-b27b0e522912"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.195176 4768 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.204468 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "34c3f34f-e575-4f6b-a730-b27b0e522912" (UID: "34c3f34f-e575-4f6b-a730-b27b0e522912"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.296278 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34c3f34f-e575-4f6b-a730-b27b0e522912-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.296778 4768 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34c3f34f-e575-4f6b-a730-b27b0e522912-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.842315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34c3f34f-e575-4f6b-a730-b27b0e522912","Type":"ContainerDied","Data":"484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466"} Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.842720 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="484ec3ac75ca6f6b8923137752f174cf10ab3149fe50afb0e20e753aa4b0c466" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.843219 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.848273 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.850515 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 18:37:53 crc kubenswrapper[4768]: I0223 18:37:53.851941 4768 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a" exitCode=0 Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.080956 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.082420 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.083444 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.083951 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.209590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.209738 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210112 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210195 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210388 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210543 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210656 4768 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.210732 4768 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.312354 4768 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.865709 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.868738 4768 scope.go:117] "RemoveContainer" containerID="013d84bc66eee890469e1bcbf699bf087ce78c2d084fa6c11e0ac7046cd4a645" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.869036 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.899667 4768 scope.go:117] "RemoveContainer" containerID="cc414f688ee3e48620eb74e9c5486a08558fa218b46c13ccc0e4cdf08a2ebb86" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.903404 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.903831 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.923402 4768 scope.go:117] "RemoveContainer" containerID="98829a6a60e3ea0d1de74948edc4c66fb879a72f9ba319e5d5233dffd3601b5f" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.941881 4768 scope.go:117] "RemoveContainer" containerID="2bc665ee12b3b5a67635dccecc8dc4f6aa34e331b1459a6494554872c6427131" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.954601 4768 scope.go:117] "RemoveContainer" containerID="1becbefd00d715f8adbddfae08b45fefed5a0e0060eb56ed008ea5dd9992c22a" Feb 23 18:37:54 crc kubenswrapper[4768]: I0223 18:37:54.970238 4768 scope.go:117] "RemoveContainer" containerID="c491e99fb12af86f01feb4fc13927d784fd76009f1aa3c83cb394356f7537e2d" Feb 23 18:37:55 crc kubenswrapper[4768]: I0223 18:37:55.312139 4768 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:55 crc kubenswrapper[4768]: I0223 18:37:55.312382 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:55 crc kubenswrapper[4768]: I0223 18:37:55.316729 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 23 18:37:56 crc kubenswrapper[4768]: E0223 18:37:56.246305 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:56 crc kubenswrapper[4768]: I0223 18:37:56.246809 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:56 crc kubenswrapper[4768]: E0223 18:37:56.285240 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896f4156d68eb6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,LastTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:37:56 crc kubenswrapper[4768]: E0223 18:37:56.622697 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896f4156d68eb6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,LastTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:37:56 crc kubenswrapper[4768]: I0223 18:37:56.883051 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bf73ee3769b963fc2f739aced001631e04b7bdcb51a581da1fc65e99d563ceca"} Feb 23 18:37:56 crc kubenswrapper[4768]: I0223 18:37:56.883539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d3131e3bf41e0f7e4f35f94265914cdb27fd31e504852eaf67af2d7307891113"} Feb 23 18:37:56 crc kubenswrapper[4768]: E0223 18:37:56.884141 4768 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:37:56 crc kubenswrapper[4768]: I0223 18:37:56.884329 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.872150 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.873031 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.874196 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.874745 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.875303 4768 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:37:58 crc kubenswrapper[4768]: I0223 18:37:58.875363 4768 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 23 18:37:58 crc kubenswrapper[4768]: E0223 18:37:58.875831 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="200ms" Feb 23 18:37:59 crc kubenswrapper[4768]: E0223 18:37:59.077493 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="400ms" Feb 23 18:37:59 crc kubenswrapper[4768]: E0223 18:37:59.478570 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="800ms" Feb 23 18:38:00 crc kubenswrapper[4768]: E0223 18:38:00.280414 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="1.6s" Feb 23 18:38:01 crc kubenswrapper[4768]: E0223 18:38:01.881687 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: connect: connection refused" interval="3.2s" Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.950713 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.952427 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.952498 4768 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54" exitCode=1 Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.952592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54"} Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.953780 4768 scope.go:117] "RemoveContainer" containerID="3d8103b14dbbe12fbc648b89a44fcc342d3d4af61e09a862b26f5e4e0d701f54" Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.954041 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:04 crc kubenswrapper[4768]: I0223 18:38:04.954887 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:05 crc kubenswrapper[4768]: E0223 18:38:05.083474 4768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.115:6443: 
connect: connection refused" interval="6.4s" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.175119 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.312141 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.313193 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.612517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.961911 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.963904 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.964023 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4df2290bc9f5af30a46de156601a83f96c45e2fdc56044f6a138d7d57afa150d"} Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.965157 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:05 crc kubenswrapper[4768]: I0223 18:38:05.966023 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.306685 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.308674 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.309726 4768 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.331052 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.331433 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:06 crc kubenswrapper[4768]: E0223 18:38:06.332525 4768 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.333314 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:06 crc kubenswrapper[4768]: W0223 18:38:06.372378 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-37256f034a3b780dd5fd56ca4f8cb407bf785b7d34f324aab2c0f50dcad39acb WatchSource:0}: Error finding container 37256f034a3b780dd5fd56ca4f8cb407bf785b7d34f324aab2c0f50dcad39acb: Status 404 returned error can't find the container with id 37256f034a3b780dd5fd56ca4f8cb407bf785b7d34f324aab2c0f50dcad39acb Feb 23 18:38:06 crc kubenswrapper[4768]: E0223 18:38:06.624140 4768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.115:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1896f4156d68eb6c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,LastTimestamp:2026-02-23 18:37:56.28395198 +0000 UTC m=+271.674437780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.974286 4768 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="9dc8f65143de32e594b55b222e8a1f1f323a0800a758197d4ed63663c44b16fc" exitCode=0 Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.974375 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"9dc8f65143de32e594b55b222e8a1f1f323a0800a758197d4ed63663c44b16fc"} Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.974752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37256f034a3b780dd5fd56ca4f8cb407bf785b7d34f324aab2c0f50dcad39acb"} Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.975333 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.975360 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:06 crc kubenswrapper[4768]: E0223 18:38:06.976126 4768 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.976399 4768 status_manager.go:851] "Failed to get status for pod" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:06 crc kubenswrapper[4768]: I0223 18:38:06.976857 4768 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.115:6443: connect: connection refused" Feb 23 18:38:07 crc kubenswrapper[4768]: I0223 18:38:07.981906 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"48a0e7eb31ba426adae428bd94db0cbfc3455d08e2a45aaebda96707739e5b6a"} Feb 23 18:38:07 crc kubenswrapper[4768]: I0223 18:38:07.983033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"65e732a01e6f14c1fc320edf51a06c1cfdb47d3bec939e5d869649157d4905cf"} Feb 23 18:38:07 crc kubenswrapper[4768]: I0223 18:38:07.983059 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e7261ccf75a1141ce17b996a877393967c2f6002c3b78385974ee7479f6ac955"} Feb 23 18:38:08 crc kubenswrapper[4768]: I0223 18:38:08.995342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f28ddb9c8675042d9c7e048844cb7a2e1f160719be61b81d59e332196e1566a4"} Feb 23 18:38:08 crc kubenswrapper[4768]: I0223 18:38:08.995888 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:08 crc kubenswrapper[4768]: I0223 18:38:08.995942 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf3ef921c95d737f090a764b0ec1ba6ddccc5fbdc842d585256378e0851bf3c0"} Feb 23 18:38:08 crc kubenswrapper[4768]: I0223 18:38:08.995783 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:08 crc kubenswrapper[4768]: I0223 18:38:08.995982 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:09 crc kubenswrapper[4768]: I0223 18:38:09.546013 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:38:09 crc kubenswrapper[4768]: I0223 18:38:09.546600 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:38:09 crc kubenswrapper[4768]: I0223 18:38:09.546892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:38:09 crc kubenswrapper[4768]: I0223 18:38:09.548111 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:38:09 crc kubenswrapper[4768]: I0223 
18:38:09.548461 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f" gracePeriod=600 Feb 23 18:38:10 crc kubenswrapper[4768]: I0223 18:38:10.003217 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f"} Feb 23 18:38:10 crc kubenswrapper[4768]: I0223 18:38:10.003265 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f" exitCode=0 Feb 23 18:38:10 crc kubenswrapper[4768]: I0223 18:38:10.005468 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81"} Feb 23 18:38:11 crc kubenswrapper[4768]: I0223 18:38:11.333491 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:11 crc kubenswrapper[4768]: I0223 18:38:11.333932 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:11 crc kubenswrapper[4768]: I0223 18:38:11.341591 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:14 crc kubenswrapper[4768]: I0223 18:38:14.100312 4768 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.048457 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.048821 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.055392 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.176174 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.198676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.328734 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1901846a-de96-4137-b2df-a78b1e72b4fd" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.611665 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:15 crc kubenswrapper[4768]: I0223 18:38:15.618345 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 23 18:38:16 crc kubenswrapper[4768]: I0223 18:38:16.056675 4768 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:16 crc 
kubenswrapper[4768]: I0223 18:38:16.056732 4768 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="05f06f3d-7cdc-4f49-8ed8-8d02c50d25c3" Feb 23 18:38:16 crc kubenswrapper[4768]: I0223 18:38:16.061239 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="1901846a-de96-4137-b2df-a78b1e72b4fd" Feb 23 18:38:24 crc kubenswrapper[4768]: I0223 18:38:24.029132 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 23 18:38:24 crc kubenswrapper[4768]: I0223 18:38:24.536134 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 23 18:38:24 crc kubenswrapper[4768]: I0223 18:38:24.665591 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 18:38:25 crc kubenswrapper[4768]: I0223 18:38:25.048876 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 23 18:38:25 crc kubenswrapper[4768]: I0223 18:38:25.088332 4768 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 23 18:38:25 crc kubenswrapper[4768]: I0223 18:38:25.220993 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 23 18:38:25 crc kubenswrapper[4768]: I0223 18:38:25.506710 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 23 18:38:25 crc kubenswrapper[4768]: I0223 18:38:25.548363 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 23 18:38:25 crc 
kubenswrapper[4768]: I0223 18:38:25.996340 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.075161 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.086035 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.263700 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.456484 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.547692 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.602947 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.725581 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.742224 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.772696 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.813838 4768 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.833797 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.942956 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.965993 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 23 18:38:26 crc kubenswrapper[4768]: I0223 18:38:26.974994 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.244990 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.259191 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.361083 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.422327 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.476211 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.619028 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 
18:38:27.751875 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.811562 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.887054 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 23 18:38:27 crc kubenswrapper[4768]: I0223 18:38:27.970195 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.031484 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.081214 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.130653 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.163871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.186451 4768 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.226468 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.392916 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 23 18:38:28 crc 
kubenswrapper[4768]: I0223 18:38:28.542236 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.570387 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.576078 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.587337 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.605434 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.632399 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.634449 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.662773 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.666125 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.707659 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.738140 4768 reflector.go:368] Caches populated 
for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.746325 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.746391 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.746432 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtmth","openshift-marketplace/redhat-marketplace-jt29q","openshift-marketplace/certified-operators-jslxc","openshift-marketplace/marketplace-operator-79b997595-r7fm5","openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.746806 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-52sbl" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="registry-server" containerID="cri-o://f90a09537949dfe88e7b117083c7035e4dcad874c64bce7b2f4778f5bb7c706a" gracePeriod=30 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.747471 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" containerID="cri-o://b3ff17088e7daa77067d73e5e6d823f55e6cc109603a480911a8ccdc188f0b4a" gracePeriod=30 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.747816 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jslxc" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="registry-server" containerID="cri-o://6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700" gracePeriod=30 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.748474 4768 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-xtmth" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="registry-server" containerID="cri-o://9a6d1d5770bad3e379ce87370a5731f36b74c7149d5ba5a19356c8f143e8eadf" gracePeriod=30 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.748583 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jt29q" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="registry-server" containerID="cri-o://6d244d56e25d68bca5ead99594521b69d11be655677ea2adb3fa711d30dc0566" gracePeriod=30 Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.809731 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.813435 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.813416573 podStartE2EDuration="14.813416573s" podCreationTimestamp="2026-02-23 18:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:38:28.809781983 +0000 UTC m=+304.200267783" watchObservedRunningTime="2026-02-23 18:38:28.813416573 +0000 UTC m=+304.203902373" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.848369 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.871081 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.909706 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 
18:38:28.926840 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.973763 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 23 18:38:28 crc kubenswrapper[4768]: I0223 18:38:28.982368 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.041126 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.088196 4768 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.121949 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.140265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.151292 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.152124 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerID="6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700" exitCode=0 Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.152198 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerDied","Data":"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.152233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jslxc" event={"ID":"ed08d934-3f52-47e6-89a0-16d5481ac4bd","Type":"ContainerDied","Data":"3c4c9f7a34b7d537849a5ae4376d060ea3b336ab8859fe62aee2e040531835d7"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.152277 4768 scope.go:117] "RemoveContainer" containerID="6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.155642 4768 generic.go:334] "Generic (PLEG): container finished" podID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerID="f90a09537949dfe88e7b117083c7035e4dcad874c64bce7b2f4778f5bb7c706a" exitCode=0 Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.155696 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerDied","Data":"f90a09537949dfe88e7b117083c7035e4dcad874c64bce7b2f4778f5bb7c706a"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.161326 4768 generic.go:334] "Generic (PLEG): container finished" podID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerID="9a6d1d5770bad3e379ce87370a5731f36b74c7149d5ba5a19356c8f143e8eadf" exitCode=0 Feb 23 
18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.161385 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerDied","Data":"9a6d1d5770bad3e379ce87370a5731f36b74c7149d5ba5a19356c8f143e8eadf"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.162377 4768 generic.go:334] "Generic (PLEG): container finished" podID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerID="b3ff17088e7daa77067d73e5e6d823f55e6cc109603a480911a8ccdc188f0b4a" exitCode=0 Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.162446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" event={"ID":"3cff9f42-aeae-4c76-a542-75cc5c37254a","Type":"ContainerDied","Data":"b3ff17088e7daa77067d73e5e6d823f55e6cc109603a480911a8ccdc188f0b4a"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.163989 4768 generic.go:334] "Generic (PLEG): container finished" podID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerID="6d244d56e25d68bca5ead99594521b69d11be655677ea2adb3fa711d30dc0566" exitCode=0 Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.164011 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerDied","Data":"6d244d56e25d68bca5ead99594521b69d11be655677ea2adb3fa711d30dc0566"} Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.183318 4768 scope.go:117] "RemoveContainer" containerID="cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.208812 4768 scope.go:117] "RemoveContainer" containerID="265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.243533 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.244054 4768 scope.go:117] "RemoveContainer" containerID="6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700" Feb 23 18:38:29 crc kubenswrapper[4768]: E0223 18:38:29.244320 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700\": container with ID starting with 6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700 not found: ID does not exist" containerID="6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.244371 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700"} err="failed to get container status \"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700\": rpc error: code = NotFound desc = could not find container \"6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700\": container with ID starting with 6ed8f4081a49b862b22aaaaeb6b455274586f8358d250694775d24abe1b40700 not found: ID does not exist" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.244400 4768 scope.go:117] "RemoveContainer" containerID="cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f" Feb 23 18:38:29 crc kubenswrapper[4768]: E0223 18:38:29.244930 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f\": container with ID starting with cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f not found: ID does not exist" containerID="cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f" Feb 23 18:38:29 crc 
kubenswrapper[4768]: I0223 18:38:29.244968 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f"} err="failed to get container status \"cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f\": rpc error: code = NotFound desc = could not find container \"cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f\": container with ID starting with cf1eea9e60ddc50e6495b08d854cd64ba62973a609b2663f60d8afa3500f903f not found: ID does not exist" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.244989 4768 scope.go:117] "RemoveContainer" containerID="265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323" Feb 23 18:38:29 crc kubenswrapper[4768]: E0223 18:38:29.245429 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323\": container with ID starting with 265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323 not found: ID does not exist" containerID="265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.245533 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323"} err="failed to get container status \"265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323\": rpc error: code = NotFound desc = could not find container \"265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323\": container with ID starting with 265cf0320f1c8a4215f9a4496eca215206ec7ae759aadcd55e279e5331b76323 not found: ID does not exist" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.268301 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.282821 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.283075 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.298438 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.314756 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.342637 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content\") pod \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.342977 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgc6z\" (UniqueName: \"kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z\") pod \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.343315 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities\") pod \"09dd656a-5018-48c3-b1ca-0318e0de4161\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.343583 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content\") pod \"b1f0d482-3b79-4272-bd0a-976fd8053576\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.343631 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca\") pod \"3cff9f42-aeae-4c76-a542-75cc5c37254a\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.343656 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities\") pod \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\" (UID: \"ed08d934-3f52-47e6-89a0-16d5481ac4bd\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.346196 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities" (OuterVolumeSpecName: "utilities") pod "ed08d934-3f52-47e6-89a0-16d5481ac4bd" (UID: "ed08d934-3f52-47e6-89a0-16d5481ac4bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.347171 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities" (OuterVolumeSpecName: "utilities") pod "09dd656a-5018-48c3-b1ca-0318e0de4161" (UID: "09dd656a-5018-48c3-b1ca-0318e0de4161"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.347987 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3cff9f42-aeae-4c76-a542-75cc5c37254a" (UID: "3cff9f42-aeae-4c76-a542-75cc5c37254a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.354401 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z" (OuterVolumeSpecName: "kube-api-access-dgc6z") pod "ed08d934-3f52-47e6-89a0-16d5481ac4bd" (UID: "ed08d934-3f52-47e6-89a0-16d5481ac4bd"). InnerVolumeSpecName "kube-api-access-dgc6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.382279 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1f0d482-3b79-4272-bd0a-976fd8053576" (UID: "b1f0d482-3b79-4272-bd0a-976fd8053576"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.386030 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.397082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.404062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed08d934-3f52-47e6-89a0-16d5481ac4bd" (UID: "ed08d934-3f52-47e6-89a0-16d5481ac4bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.444606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics\") pod \"3cff9f42-aeae-4c76-a542-75cc5c37254a\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities\") pod \"b1f0d482-3b79-4272-bd0a-976fd8053576\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445322 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrfhs\" (UniqueName: \"kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs\") pod \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " Feb 23 18:38:29 crc 
kubenswrapper[4768]: I0223 18:38:29.445445 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bcmg\" (UniqueName: \"kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg\") pod \"3cff9f42-aeae-4c76-a542-75cc5c37254a\" (UID: \"3cff9f42-aeae-4c76-a542-75cc5c37254a\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445567 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlm9d\" (UniqueName: \"kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d\") pod \"09dd656a-5018-48c3-b1ca-0318e0de4161\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445726 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities\") pod \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445799 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content\") pod \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\" (UID: \"80aa487f-1e02-4a14-88da-a96a5f2a8f07\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.445888 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl45n\" (UniqueName: \"kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n\") pod \"b1f0d482-3b79-4272-bd0a-976fd8053576\" (UID: \"b1f0d482-3b79-4272-bd0a-976fd8053576\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446001 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content\") pod \"09dd656a-5018-48c3-b1ca-0318e0de4161\" (UID: \"09dd656a-5018-48c3-b1ca-0318e0de4161\") " Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446343 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446415 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities" (OuterVolumeSpecName: "utilities") pod "80aa487f-1e02-4a14-88da-a96a5f2a8f07" (UID: "80aa487f-1e02-4a14-88da-a96a5f2a8f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446430 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgc6z\" (UniqueName: \"kubernetes.io/projected/ed08d934-3f52-47e6-89a0-16d5481ac4bd-kube-api-access-dgc6z\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446508 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446521 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.446533 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc 
kubenswrapper[4768]: I0223 18:38:29.446546 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed08d934-3f52-47e6-89a0-16d5481ac4bd-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.448000 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities" (OuterVolumeSpecName: "utilities") pod "b1f0d482-3b79-4272-bd0a-976fd8053576" (UID: "b1f0d482-3b79-4272-bd0a-976fd8053576"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.448964 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d" (OuterVolumeSpecName: "kube-api-access-tlm9d") pod "09dd656a-5018-48c3-b1ca-0318e0de4161" (UID: "09dd656a-5018-48c3-b1ca-0318e0de4161"). InnerVolumeSpecName "kube-api-access-tlm9d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.449141 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n" (OuterVolumeSpecName: "kube-api-access-nl45n") pod "b1f0d482-3b79-4272-bd0a-976fd8053576" (UID: "b1f0d482-3b79-4272-bd0a-976fd8053576"). InnerVolumeSpecName "kube-api-access-nl45n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.449193 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg" (OuterVolumeSpecName: "kube-api-access-6bcmg") pod "3cff9f42-aeae-4c76-a542-75cc5c37254a" (UID: "3cff9f42-aeae-4c76-a542-75cc5c37254a"). InnerVolumeSpecName "kube-api-access-6bcmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.449823 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs" (OuterVolumeSpecName: "kube-api-access-rrfhs") pod "80aa487f-1e02-4a14-88da-a96a5f2a8f07" (UID: "80aa487f-1e02-4a14-88da-a96a5f2a8f07"). InnerVolumeSpecName "kube-api-access-rrfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.451712 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3cff9f42-aeae-4c76-a542-75cc5c37254a" (UID: "3cff9f42-aeae-4c76-a542-75cc5c37254a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.452411 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.502909 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09dd656a-5018-48c3-b1ca-0318e0de4161" (UID: "09dd656a-5018-48c3-b1ca-0318e0de4161"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.507282 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.528184 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.537522 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548162 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlm9d\" (UniqueName: \"kubernetes.io/projected/09dd656a-5018-48c3-b1ca-0318e0de4161-kube-api-access-tlm9d\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548222 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548239 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl45n\" (UniqueName: \"kubernetes.io/projected/b1f0d482-3b79-4272-bd0a-976fd8053576-kube-api-access-nl45n\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548274 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09dd656a-5018-48c3-b1ca-0318e0de4161-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548290 4768 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/3cff9f42-aeae-4c76-a542-75cc5c37254a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548303 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1f0d482-3b79-4272-bd0a-976fd8053576-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548316 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrfhs\" (UniqueName: \"kubernetes.io/projected/80aa487f-1e02-4a14-88da-a96a5f2a8f07-kube-api-access-rrfhs\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548330 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bcmg\" (UniqueName: \"kubernetes.io/projected/3cff9f42-aeae-4c76-a542-75cc5c37254a-kube-api-access-6bcmg\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.548841 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.562768 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.610374 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.618220 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80aa487f-1e02-4a14-88da-a96a5f2a8f07" (UID: "80aa487f-1e02-4a14-88da-a96a5f2a8f07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.649616 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aa487f-1e02-4a14-88da-a96a5f2a8f07-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.664796 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.745131 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.763682 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.836449 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 23 18:38:29 crc kubenswrapper[4768]: I0223 18:38:29.888384 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.082706 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.098799 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.120425 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.130503 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 23 
18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.131094 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.163714 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.176390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" event={"ID":"3cff9f42-aeae-4c76-a542-75cc5c37254a","Type":"ContainerDied","Data":"15f76464bbb56246cd9b63990ebec3db8b69520b58dcfac7e6a7cad6f461b523"} Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.176516 4768 scope.go:117] "RemoveContainer" containerID="b3ff17088e7daa77067d73e5e6d823f55e6cc109603a480911a8ccdc188f0b4a" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.176602 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-r7fm5" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.219094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jt29q" event={"ID":"b1f0d482-3b79-4272-bd0a-976fd8053576","Type":"ContainerDied","Data":"21ae9a21cc40b976c0d44b718423082d70c741cc72f1470df632e528fa926484"} Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.219506 4768 scope.go:117] "RemoveContainer" containerID="6d244d56e25d68bca5ead99594521b69d11be655677ea2adb3fa711d30dc0566" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.219753 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jt29q" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.221418 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jslxc" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.226278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-52sbl" event={"ID":"80aa487f-1e02-4a14-88da-a96a5f2a8f07","Type":"ContainerDied","Data":"fcd8b9884f72738fc4bb598c9b32d915630a55419b0bfd7a5895daec6e2a43c5"} Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.226306 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-52sbl" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.229117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtmth" event={"ID":"09dd656a-5018-48c3-b1ca-0318e0de4161","Type":"ContainerDied","Data":"f286661f7f7d73b55eb6893e605d1be9afa315959a772874357ae25a5b6f31fc"} Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.229293 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xtmth" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.238327 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r7fm5"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.242865 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-r7fm5"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.255642 4768 scope.go:117] "RemoveContainer" containerID="a3e21b8cc9243173f71c05168270ea37c572449a782e53d6ac391bd4b894ca35" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.272619 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt29q"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.278075 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jt29q"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.288572 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jslxc"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.301315 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jslxc"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.301511 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.306454 4768 scope.go:117] "RemoveContainer" containerID="2a4f22896d95b21eee56e681ae06f68f7882643f88db06525007abedde741fe0" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.316307 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.324137 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-52sbl"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.330736 4768 scope.go:117] "RemoveContainer" containerID="f90a09537949dfe88e7b117083c7035e4dcad874c64bce7b2f4778f5bb7c706a" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.337040 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtmth"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.343388 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xtmth"] Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.345280 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.353749 4768 scope.go:117] "RemoveContainer" containerID="f31bbc669164470861dde11c129797c425c2d5a7b9aada31524bc43942092a33" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.361510 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.375216 4768 scope.go:117] "RemoveContainer" containerID="a8782ad90eb6f26623684d02e190cd5aa36f0be9ae2de8cb4448c1a2cabea3ab" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.395126 4768 scope.go:117] "RemoveContainer" containerID="9a6d1d5770bad3e379ce87370a5731f36b74c7149d5ba5a19356c8f143e8eadf" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.426380 4768 scope.go:117] "RemoveContainer" containerID="f9ba3ebaae18bd73add5b3d33bfd7e7d31d9b1774a9caa7b22ffcd53da3b68aa" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.448990 4768 scope.go:117] "RemoveContainer" containerID="b7e91aeb5088931f033d7a0735392b3ca7545d59375e8a128af918e13cce9300" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.454931 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.460098 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.491245 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.549166 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.637711 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.754787 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.766077 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.782717 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.885448 4768 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.905917 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.944534 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 23 18:38:30 crc 
kubenswrapper[4768]: I0223 18:38:30.962494 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 23 18:38:30 crc kubenswrapper[4768]: I0223 18:38:30.984218 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.132601 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.148922 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.161378 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.292626 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.323002 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" path="/var/lib/kubelet/pods/09dd656a-5018-48c3-b1ca-0318e0de4161/volumes" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.324336 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" path="/var/lib/kubelet/pods/3cff9f42-aeae-4c76-a542-75cc5c37254a/volumes" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.325317 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" path="/var/lib/kubelet/pods/80aa487f-1e02-4a14-88da-a96a5f2a8f07/volumes" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.327899 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" path="/var/lib/kubelet/pods/b1f0d482-3b79-4272-bd0a-976fd8053576/volumes" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.329537 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" path="/var/lib/kubelet/pods/ed08d934-3f52-47e6-89a0-16d5481ac4bd/volumes" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.434660 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.441414 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.484994 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.626445 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.660939 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.729841 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.741090 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.758725 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.793810 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.868463 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.948003 4768 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.959085 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.982370 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 23 18:38:31 crc kubenswrapper[4768]: I0223 18:38:31.990670 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.077788 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.121029 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.227764 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.242238 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.264033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.273634 
4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.389655 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.403780 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.530880 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.543177 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.679790 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.795730 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.843852 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.877198 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.925944 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.932011 4768 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.971662 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 23 18:38:32 crc kubenswrapper[4768]: I0223 18:38:32.977568 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.014649 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.066458 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.096647 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.105470 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.119077 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.141505 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.248769 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.284931 4768 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.297798 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.434640 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.444515 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.567605 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.608145 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.629040 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.629754 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.734642 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.792402 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.863964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.864511 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 
18:38:33.936568 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.939331 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.969427 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 23 18:38:33 crc kubenswrapper[4768]: I0223 18:38:33.980182 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.132892 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6tjv"] Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133159 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133174 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133187 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" containerName="installer" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133193 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" containerName="installer" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133203 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133210 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133219 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133226 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133234 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133246 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133257 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133281 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133289 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133295 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133308 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133315 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133323 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133330 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133341 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133347 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133357 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133365 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133373 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133379 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="extract-content" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133389 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133395 4768 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="extract-utilities" Feb 23 18:38:34 crc kubenswrapper[4768]: E0223 18:38:34.133403 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133409 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133507 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c3f34f-e575-4f6b-a730-b27b0e522912" containerName="installer" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133517 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed08d934-3f52-47e6-89a0-16d5481ac4bd" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133526 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="09dd656a-5018-48c3-b1ca-0318e0de4161" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133534 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aa487f-1e02-4a14-88da-a96a5f2a8f07" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133543 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1f0d482-3b79-4272-bd0a-976fd8053576" containerName="registry-server" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.133551 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cff9f42-aeae-4c76-a542-75cc5c37254a" containerName="marketplace-operator" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.134034 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.140189 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.140414 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.140661 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.140924 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.143980 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.145449 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6tjv"] Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.226077 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.312153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.312212 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.312239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgx8z\" (UniqueName: \"kubernetes.io/projected/c25ac972-0ed9-475d-b506-222f90fe52f9-kube-api-access-pgx8z\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.336733 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.412973 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgx8z\" (UniqueName: \"kubernetes.io/projected/c25ac972-0ed9-475d-b506-222f90fe52f9-kube-api-access-pgx8z\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.413085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.413135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.415641 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.418701 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c25ac972-0ed9-475d-b506-222f90fe52f9-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.421825 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.435217 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgx8z\" (UniqueName: \"kubernetes.io/projected/c25ac972-0ed9-475d-b506-222f90fe52f9-kube-api-access-pgx8z\") pod \"marketplace-operator-79b997595-n6tjv\" (UID: \"c25ac972-0ed9-475d-b506-222f90fe52f9\") " pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.451108 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.461505 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.467707 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.518007 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.538796 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.566048 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.596387 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.740283 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.767348 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-n6tjv"] Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.776867 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.837274 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 
18:38:34.853136 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.910772 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 23 18:38:34 crc kubenswrapper[4768]: I0223 18:38:34.972807 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.004683 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.115110 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.152346 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.268117 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.268311 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" event={"ID":"c25ac972-0ed9-475d-b506-222f90fe52f9","Type":"ContainerStarted","Data":"2223a2ae8253e21f29f4f2b7547e2778f0ad4077673bf1c8d975373a25c7b961"} Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.268371 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" event={"ID":"c25ac972-0ed9-475d-b506-222f90fe52f9","Type":"ContainerStarted","Data":"eccfdbd48fc7b29c5001d1e007ce4be3704be4a18fd52d5f8b0942e0cc7110e5"} Feb 23 18:38:35 crc 
kubenswrapper[4768]: I0223 18:38:35.268579 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.272841 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.289316 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-n6tjv" podStartSLOduration=8.289295112 podStartE2EDuration="8.289295112s" podCreationTimestamp="2026-02-23 18:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:38:35.28704212 +0000 UTC m=+310.677527920" watchObservedRunningTime="2026-02-23 18:38:35.289295112 +0000 UTC m=+310.679780912" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.294846 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.303976 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.384646 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.453369 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.494936 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.531166 4768 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.571185 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.777479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.777957 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.814273 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.933206 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 23 18:38:35 crc kubenswrapper[4768]: I0223 18:38:35.984372 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.001957 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.045841 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.048848 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.113318 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 
18:38:36.143388 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.151663 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.225686 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.302476 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.338488 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.378964 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.478028 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.508302 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.515575 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.678849 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.767631 4768 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 23 18:38:36 crc kubenswrapper[4768]: I0223 18:38:36.862311 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.030715 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.046705 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.083450 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.196992 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.405762 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.414056 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.501891 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.601823 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.626114 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 
18:38:37.650573 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.697171 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.725572 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.804393 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.806055 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.824370 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.830694 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 23 18:38:37 crc kubenswrapper[4768]: I0223 18:38:37.982442 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.047881 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.146290 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.156927 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 18:38:38 
crc kubenswrapper[4768]: I0223 18:38:38.167987 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.273115 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.341682 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.426844 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.467033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.525462 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.567667 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.614276 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.832831 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.929445 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 23 18:38:38 crc kubenswrapper[4768]: I0223 18:38:38.978968 4768 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.271437 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.391092 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.532653 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.548031 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.575273 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 23 18:38:39 crc kubenswrapper[4768]: I0223 18:38:39.660000 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 23 18:38:40 crc kubenswrapper[4768]: I0223 18:38:40.032982 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 23 18:38:40 crc kubenswrapper[4768]: I0223 18:38:40.297748 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 23 18:38:46 crc kubenswrapper[4768]: I0223 18:38:46.887598 4768 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 23 18:38:46 crc kubenswrapper[4768]: I0223 18:38:46.888864 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" containerID="cri-o://bf73ee3769b963fc2f739aced001631e04b7bdcb51a581da1fc65e99d563ceca" gracePeriod=5 Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.378425 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.379159 4768 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bf73ee3769b963fc2f739aced001631e04b7bdcb51a581da1fc65e99d563ceca" exitCode=137 Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.486367 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.486443 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.683465 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.684053 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.683668 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") 
pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.684108 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.684570 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.684785 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.685011 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.684836 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.685070 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.685782 4768 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.685937 4768 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.693961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.791409 4768 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.791493 4768 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:52 crc kubenswrapper[4768]: I0223 18:38:52.791517 4768 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 23 18:38:53 crc kubenswrapper[4768]: I0223 18:38:53.319531 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 23 18:38:53 crc kubenswrapper[4768]: I0223 18:38:53.388563 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 23 18:38:53 crc kubenswrapper[4768]: I0223 18:38:53.388648 4768 scope.go:117] "RemoveContainer" containerID="bf73ee3769b963fc2f739aced001631e04b7bdcb51a581da1fc65e99d563ceca" Feb 23 18:38:53 crc kubenswrapper[4768]: I0223 18:38:53.388778 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 23 18:38:59 crc kubenswrapper[4768]: I0223 18:38:59.521460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.208831 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.210697 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" podUID="01af9d70-86d3-4601-a000-1344c4752671" containerName="controller-manager" containerID="cri-o://d0732a992483497898f300d86cdf8a50af1ad8383600ef6403a86dedc4b01015" gracePeriod=30 Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.307400 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.308156 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" podUID="0142e5c4-e4b5-458f-9b5e-59458769788c" containerName="route-controller-manager" containerID="cri-o://afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5" gracePeriod=30 Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.470529 4768 generic.go:334] "Generic (PLEG): container finished" podID="01af9d70-86d3-4601-a000-1344c4752671" containerID="d0732a992483497898f300d86cdf8a50af1ad8383600ef6403a86dedc4b01015" exitCode=0 Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.470586 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" 
event={"ID":"01af9d70-86d3-4601-a000-1344c4752671","Type":"ContainerDied","Data":"d0732a992483497898f300d86cdf8a50af1ad8383600ef6403a86dedc4b01015"} Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.666785 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.754708 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.802773 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca\") pod \"01af9d70-86d3-4601-a000-1344c4752671\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles\") pod \"01af9d70-86d3-4601-a000-1344c4752671\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803266 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5z4n\" (UniqueName: \"kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n\") pod \"0142e5c4-e4b5-458f-9b5e-59458769788c\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803304 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert\") pod \"01af9d70-86d3-4601-a000-1344c4752671\" (UID: 
\"01af9d70-86d3-4601-a000-1344c4752671\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803358 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca\") pod \"0142e5c4-e4b5-458f-9b5e-59458769788c\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803431 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcv6d\" (UniqueName: \"kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d\") pod \"01af9d70-86d3-4601-a000-1344c4752671\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803464 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config\") pod \"01af9d70-86d3-4601-a000-1344c4752671\" (UID: \"01af9d70-86d3-4601-a000-1344c4752671\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803495 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config\") pod \"0142e5c4-e4b5-458f-9b5e-59458769788c\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803543 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert\") pod \"0142e5c4-e4b5-458f-9b5e-59458769788c\" (UID: \"0142e5c4-e4b5-458f-9b5e-59458769788c\") " Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803615 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles" 
(OuterVolumeSpecName: "proxy-ca-bundles") pod "01af9d70-86d3-4601-a000-1344c4752671" (UID: "01af9d70-86d3-4601-a000-1344c4752671"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.803815 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.804185 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config" (OuterVolumeSpecName: "config") pod "01af9d70-86d3-4601-a000-1344c4752671" (UID: "01af9d70-86d3-4601-a000-1344c4752671"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.804322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca" (OuterVolumeSpecName: "client-ca") pod "01af9d70-86d3-4601-a000-1344c4752671" (UID: "01af9d70-86d3-4601-a000-1344c4752671"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.804549 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca" (OuterVolumeSpecName: "client-ca") pod "0142e5c4-e4b5-458f-9b5e-59458769788c" (UID: "0142e5c4-e4b5-458f-9b5e-59458769788c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.804583 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config" (OuterVolumeSpecName: "config") pod "0142e5c4-e4b5-458f-9b5e-59458769788c" (UID: "0142e5c4-e4b5-458f-9b5e-59458769788c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.810068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d" (OuterVolumeSpecName: "kube-api-access-bcv6d") pod "01af9d70-86d3-4601-a000-1344c4752671" (UID: "01af9d70-86d3-4601-a000-1344c4752671"). InnerVolumeSpecName "kube-api-access-bcv6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.810155 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0142e5c4-e4b5-458f-9b5e-59458769788c" (UID: "0142e5c4-e4b5-458f-9b5e-59458769788c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.810428 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n" (OuterVolumeSpecName: "kube-api-access-g5z4n") pod "0142e5c4-e4b5-458f-9b5e-59458769788c" (UID: "0142e5c4-e4b5-458f-9b5e-59458769788c"). InnerVolumeSpecName "kube-api-access-g5z4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.813997 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01af9d70-86d3-4601-a000-1344c4752671" (UID: "01af9d70-86d3-4601-a000-1344c4752671"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904433 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01af9d70-86d3-4601-a000-1344c4752671-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904476 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904485 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcv6d\" (UniqueName: \"kubernetes.io/projected/01af9d70-86d3-4601-a000-1344c4752671-kube-api-access-bcv6d\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904498 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904508 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0142e5c4-e4b5-458f-9b5e-59458769788c-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904517 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0142e5c4-e4b5-458f-9b5e-59458769788c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904525 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01af9d70-86d3-4601-a000-1344c4752671-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:04 crc kubenswrapper[4768]: I0223 18:39:04.904534 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5z4n\" (UniqueName: \"kubernetes.io/projected/0142e5c4-e4b5-458f-9b5e-59458769788c-kube-api-access-g5z4n\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.076645 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"] Feb 23 18:39:05 crc kubenswrapper[4768]: E0223 18:39:05.076959 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01af9d70-86d3-4601-a000-1344c4752671" containerName="controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.076982 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="01af9d70-86d3-4601-a000-1344c4752671" containerName="controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: E0223 18:39:05.077009 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0142e5c4-e4b5-458f-9b5e-59458769788c" containerName="route-controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077023 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0142e5c4-e4b5-458f-9b5e-59458769788c" containerName="route-controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: E0223 18:39:05.077053 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077068 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077229 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="01af9d70-86d3-4601-a000-1344c4752671" containerName="controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077283 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0142e5c4-e4b5-458f-9b5e-59458769788c" containerName="route-controller-manager" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077305 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.077839 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.090764 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.109674 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.109750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 
18:39:05.109803 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.109838 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9zh\" (UniqueName: \"kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.109933 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.187204 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.187830 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.204153 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9zh\" (UniqueName: \"kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m299t\" (UniqueName: \"kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: 
\"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.210590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.211643 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.212722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.211583 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.212673 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.212802 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.214283 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.214334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.237016 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9zh\" (UniqueName: \"kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh\") pod \"controller-manager-b9cb6849f-jb9wp\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.314586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.314752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.314820 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m299t\" (UniqueName: \"kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.314856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: 
\"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.316470 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.316790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.317856 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.331809 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m299t\" (UniqueName: \"kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t\") pod \"route-controller-manager-9b464578f-kxvqp\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.393706 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485431 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485655 4768 generic.go:334] "Generic (PLEG): container finished" podID="0142e5c4-e4b5-458f-9b5e-59458769788c" containerID="afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5" exitCode=0 Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" event={"ID":"0142e5c4-e4b5-458f-9b5e-59458769788c","Type":"ContainerDied","Data":"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5"} Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485756 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" event={"ID":"0142e5c4-e4b5-458f-9b5e-59458769788c","Type":"ContainerDied","Data":"ff38c6d8afb2de2f2257f3bf88fe1a16befb4d6e45bd8c9a884953807d98e682"} Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485777 4768 scope.go:117] "RemoveContainer" containerID="afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.485911 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.496181 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" event={"ID":"01af9d70-86d3-4601-a000-1344c4752671","Type":"ContainerDied","Data":"816b3a62b63675fdc2aeca677951b0b50f84a3b29e50507f8858f52677b2cf23"} Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.496293 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8fb885f79-rxl5w" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.499717 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.526521 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.529157 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.546575 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d6775fbb6-c9t92"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.547202 4768 scope.go:117] "RemoveContainer" containerID="afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5" Feb 23 18:39:05 crc kubenswrapper[4768]: E0223 18:39:05.550706 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5\": container with ID starting with 
afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5 not found: ID does not exist" containerID="afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.550759 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5"} err="failed to get container status \"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5\": rpc error: code = NotFound desc = could not find container \"afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5\": container with ID starting with afeb80715d3118076c0f90493e5fd72f8c64e30271ea407b7ecb1d5ec9d772d5 not found: ID does not exist" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.550793 4768 scope.go:117] "RemoveContainer" containerID="d0732a992483497898f300d86cdf8a50af1ad8383600ef6403a86dedc4b01015" Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.570887 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.580614 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8fb885f79-rxl5w"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.689226 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"] Feb 23 18:39:05 crc kubenswrapper[4768]: I0223 18:39:05.759358 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"] Feb 23 18:39:05 crc kubenswrapper[4768]: W0223 18:39:05.773028 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d8a56a_356d_4502_8e33_529dc8a026d4.slice/crio-e6568d8b261af9cae49e47f31d019947a5a65737e8dbb718cd1723c29d382022 WatchSource:0}: Error finding container e6568d8b261af9cae49e47f31d019947a5a65737e8dbb718cd1723c29d382022: Status 404 returned error can't find the container with id e6568d8b261af9cae49e47f31d019947a5a65737e8dbb718cd1723c29d382022 Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.504945 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" event={"ID":"84d8a56a-356d-4502-8e33-529dc8a026d4","Type":"ContainerStarted","Data":"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"} Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.506698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" event={"ID":"84d8a56a-356d-4502-8e33-529dc8a026d4","Type":"ContainerStarted","Data":"e6568d8b261af9cae49e47f31d019947a5a65737e8dbb718cd1723c29d382022"} Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.506780 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.505069 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" podUID="84d8a56a-356d-4502-8e33-529dc8a026d4" containerName="route-controller-manager" containerID="cri-o://d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb" gracePeriod=30 Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.507665 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" 
event={"ID":"a66b6af0-cf59-4a5d-80a5-011420568ad3","Type":"ContainerStarted","Data":"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"} Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.507726 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" event={"ID":"a66b6af0-cf59-4a5d-80a5-011420568ad3","Type":"ContainerStarted","Data":"8ca0fdf52f3c42c5fd0af24d79df4b4687d5cd8034be45e08171d82f162f116f"} Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.507760 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.507769 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" podUID="a66b6af0-cf59-4a5d-80a5-011420568ad3" containerName="controller-manager" containerID="cri-o://9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8" gracePeriod=30 Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.512892 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.513482 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:06 crc kubenswrapper[4768]: I0223 18:39:06.534051 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" podStartSLOduration=1.5340265290000001 podStartE2EDuration="1.534026529s" podCreationTimestamp="2026-02-23 18:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 
18:39:06.531800597 +0000 UTC m=+341.922286397" watchObservedRunningTime="2026-02-23 18:39:06.534026529 +0000 UTC m=+341.924512329" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.005054 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.008113 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.032406 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" podStartSLOduration=2.032374317 podStartE2EDuration="2.032374317s" podCreationTimestamp="2026-02-23 18:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:06.617715773 +0000 UTC m=+342.008201593" watchObservedRunningTime="2026-02-23 18:39:07.032374317 +0000 UTC m=+342.422860167" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034697 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca\") pod \"84d8a56a-356d-4502-8e33-529dc8a026d4\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034740 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config\") pod \"a66b6af0-cf59-4a5d-80a5-011420568ad3\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-cw9zh\" (UniqueName: \"kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh\") pod \"a66b6af0-cf59-4a5d-80a5-011420568ad3\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca\") pod \"a66b6af0-cf59-4a5d-80a5-011420568ad3\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034810 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m299t\" (UniqueName: \"kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t\") pod \"84d8a56a-356d-4502-8e33-529dc8a026d4\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034834 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert\") pod \"a66b6af0-cf59-4a5d-80a5-011420568ad3\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034856 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config\") pod \"84d8a56a-356d-4502-8e33-529dc8a026d4\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.034878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles\") pod \"a66b6af0-cf59-4a5d-80a5-011420568ad3\" (UID: \"a66b6af0-cf59-4a5d-80a5-011420568ad3\") " Feb 23 18:39:07 crc 
kubenswrapper[4768]: I0223 18:39:07.034897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert\") pod \"84d8a56a-356d-4502-8e33-529dc8a026d4\" (UID: \"84d8a56a-356d-4502-8e33-529dc8a026d4\") " Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a66b6af0-cf59-4a5d-80a5-011420568ad3" (UID: "a66b6af0-cf59-4a5d-80a5-011420568ad3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "84d8a56a-356d-4502-8e33-529dc8a026d4" (UID: "84d8a56a-356d-4502-8e33-529dc8a026d4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035777 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a66b6af0-cf59-4a5d-80a5-011420568ad3" (UID: "a66b6af0-cf59-4a5d-80a5-011420568ad3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035786 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config" (OuterVolumeSpecName: "config") pod "84d8a56a-356d-4502-8e33-529dc8a026d4" (UID: "84d8a56a-356d-4502-8e33-529dc8a026d4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035974 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.035994 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.036008 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.036026 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84d8a56a-356d-4502-8e33-529dc8a026d4-client-ca\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.036467 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config" (OuterVolumeSpecName: "config") pod "a66b6af0-cf59-4a5d-80a5-011420568ad3" (UID: "a66b6af0-cf59-4a5d-80a5-011420568ad3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.041135 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh" (OuterVolumeSpecName: "kube-api-access-cw9zh") pod "a66b6af0-cf59-4a5d-80a5-011420568ad3" (UID: "a66b6af0-cf59-4a5d-80a5-011420568ad3"). InnerVolumeSpecName "kube-api-access-cw9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.041324 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a66b6af0-cf59-4a5d-80a5-011420568ad3" (UID: "a66b6af0-cf59-4a5d-80a5-011420568ad3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.044933 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"]
Feb 23 18:39:07 crc kubenswrapper[4768]: E0223 18:39:07.045233 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d8a56a-356d-4502-8e33-529dc8a026d4" containerName="route-controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.045294 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d8a56a-356d-4502-8e33-529dc8a026d4" containerName="route-controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: E0223 18:39:07.045325 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66b6af0-cf59-4a5d-80a5-011420568ad3" containerName="controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.045342 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66b6af0-cf59-4a5d-80a5-011420568ad3" containerName="controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.045499 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d8a56a-356d-4502-8e33-529dc8a026d4" containerName="route-controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.045550 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66b6af0-cf59-4a5d-80a5-011420568ad3" containerName="controller-manager"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.046719 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.046689 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84d8a56a-356d-4502-8e33-529dc8a026d4" (UID: "84d8a56a-356d-4502-8e33-529dc8a026d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.049644 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t" (OuterVolumeSpecName: "kube-api-access-m299t") pod "84d8a56a-356d-4502-8e33-529dc8a026d4" (UID: "84d8a56a-356d-4502-8e33-529dc8a026d4"). InnerVolumeSpecName "kube-api-access-m299t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.052483 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"]
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137299 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zljg\" (UniqueName: \"kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137621 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a66b6af0-cf59-4a5d-80a5-011420568ad3-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137638 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cw9zh\" (UniqueName: \"kubernetes.io/projected/a66b6af0-cf59-4a5d-80a5-011420568ad3-kube-api-access-cw9zh\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137654 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m299t\" (UniqueName: \"kubernetes.io/projected/84d8a56a-356d-4502-8e33-529dc8a026d4-kube-api-access-m299t\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137669 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a66b6af0-cf59-4a5d-80a5-011420568ad3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.137683 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84d8a56a-356d-4502-8e33-529dc8a026d4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.239201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.239355 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zljg\" (UniqueName: \"kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.239427 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.239463 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.240907 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.241655 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.245159 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.258739 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zljg\" (UniqueName: \"kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg\") pod \"route-controller-manager-685f5864cd-x45b8\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") " pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.331452 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0142e5c4-e4b5-458f-9b5e-59458769788c" path="/var/lib/kubelet/pods/0142e5c4-e4b5-458f-9b5e-59458769788c/volumes"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.332623 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01af9d70-86d3-4601-a000-1344c4752671" path="/var/lib/kubelet/pods/01af9d70-86d3-4601-a000-1344c4752671/volumes"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.372662 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.528855 4768 generic.go:334] "Generic (PLEG): container finished" podID="84d8a56a-356d-4502-8e33-529dc8a026d4" containerID="d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb" exitCode=0
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.529269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" event={"ID":"84d8a56a-356d-4502-8e33-529dc8a026d4","Type":"ContainerDied","Data":"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"}
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.529449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp" event={"ID":"84d8a56a-356d-4502-8e33-529dc8a026d4","Type":"ContainerDied","Data":"e6568d8b261af9cae49e47f31d019947a5a65737e8dbb718cd1723c29d382022"}
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.529504 4768 scope.go:117] "RemoveContainer" containerID="d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.529631 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.541197 4768 generic.go:334] "Generic (PLEG): container finished" podID="a66b6af0-cf59-4a5d-80a5-011420568ad3" containerID="9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8" exitCode=0
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.541309 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" event={"ID":"a66b6af0-cf59-4a5d-80a5-011420568ad3","Type":"ContainerDied","Data":"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"}
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.541354 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp" event={"ID":"a66b6af0-cf59-4a5d-80a5-011420568ad3","Type":"ContainerDied","Data":"8ca0fdf52f3c42c5fd0af24d79df4b4687d5cd8034be45e08171d82f162f116f"}
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.541460 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.570454 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"]
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.575123 4768 scope.go:117] "RemoveContainer" containerID="d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.580284 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-kxvqp"]
Feb 23 18:39:07 crc kubenswrapper[4768]: E0223 18:39:07.581020 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb\": container with ID starting with d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb not found: ID does not exist" containerID="d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.581096 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb"} err="failed to get container status \"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb\": rpc error: code = NotFound desc = could not find container \"d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb\": container with ID starting with d00b5dab51321da40b8915f007e60e56e531368aa16e5ff265eb9485b7042ccb not found: ID does not exist"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.581135 4768 scope.go:117] "RemoveContainer" containerID="9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.586599 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"]
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.595452 4768 scope.go:117] "RemoveContainer" containerID="9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"
Feb 23 18:39:07 crc kubenswrapper[4768]: E0223 18:39:07.595912 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8\": container with ID starting with 9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8 not found: ID does not exist" containerID="9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.595969 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8"} err="failed to get container status \"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8\": rpc error: code = NotFound desc = could not find container \"9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8\": container with ID starting with 9149439e69645854293e4de7a1afe585ecf91ef161cbd6e88f6cfaa44855d9d8 not found: ID does not exist"
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.596230 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-jb9wp"]
Feb 23 18:39:07 crc kubenswrapper[4768]: I0223 18:39:07.802914 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"]
Feb 23 18:39:07 crc kubenswrapper[4768]: W0223 18:39:07.811595 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod260dcc2d_93a8_4dfb_9107_29bf3f790514.slice/crio-80f33b314c3d9ac80cd4faf5fb9bec26b65844a9ac7cec6a58eb08cff933ca49 WatchSource:0}: Error finding container 80f33b314c3d9ac80cd4faf5fb9bec26b65844a9ac7cec6a58eb08cff933ca49: Status 404 returned error can't find the container with id 80f33b314c3d9ac80cd4faf5fb9bec26b65844a9ac7cec6a58eb08cff933ca49
Feb 23 18:39:08 crc kubenswrapper[4768]: I0223 18:39:08.551066 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" event={"ID":"260dcc2d-93a8-4dfb-9107-29bf3f790514","Type":"ContainerStarted","Data":"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"}
Feb 23 18:39:08 crc kubenswrapper[4768]: I0223 18:39:08.551550 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:08 crc kubenswrapper[4768]: I0223 18:39:08.551566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" event={"ID":"260dcc2d-93a8-4dfb-9107-29bf3f790514","Type":"ContainerStarted","Data":"80f33b314c3d9ac80cd4faf5fb9bec26b65844a9ac7cec6a58eb08cff933ca49"}
Feb 23 18:39:08 crc kubenswrapper[4768]: I0223 18:39:08.557218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:08 crc kubenswrapper[4768]: I0223 18:39:08.573780 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" podStartSLOduration=3.573758428 podStartE2EDuration="3.573758428s" podCreationTimestamp="2026-02-23 18:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:08.572444593 +0000 UTC m=+343.962930403" watchObservedRunningTime="2026-02-23 18:39:08.573758428 +0000 UTC m=+343.964244228"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.181269 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"]
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.182221 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.184846 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.185778 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.187382 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.187433 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.190976 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.191369 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.198817 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.207820 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"]
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.324120 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d8a56a-356d-4502-8e33-529dc8a026d4" path="/var/lib/kubelet/pods/84d8a56a-356d-4502-8e33-529dc8a026d4/volumes"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.325223 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66b6af0-cf59-4a5d-80a5-011420568ad3" path="/var/lib/kubelet/pods/a66b6af0-cf59-4a5d-80a5-011420568ad3/volumes"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.372522 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzc86\" (UniqueName: \"kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.372614 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.372659 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.372701 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.372742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.475324 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzc86\" (UniqueName: \"kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.475945 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.476161 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.476312 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.476446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.479290 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.479771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.479790 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.487223 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.509100 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzc86\" (UniqueName: \"kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86\") pod \"controller-manager-dfbd889b8-8pqz6\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.528088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:09 crc kubenswrapper[4768]: I0223 18:39:09.952584 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"]
Feb 23 18:39:09 crc kubenswrapper[4768]: W0223 18:39:09.959820 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a56e6a0_5d41_4e42_af22_9983161bd769.slice/crio-e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece WatchSource:0}: Error finding container e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece: Status 404 returned error can't find the container with id e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece
Feb 23 18:39:10 crc kubenswrapper[4768]: I0223 18:39:10.566727 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" event={"ID":"0a56e6a0-5d41-4e42-af22-9983161bd769","Type":"ContainerStarted","Data":"14edf37676fb9add48bca8117c63728b79ac542c1691fe738ac292dddedb655c"}
Feb 23 18:39:10 crc kubenswrapper[4768]: I0223 18:39:10.566770 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" event={"ID":"0a56e6a0-5d41-4e42-af22-9983161bd769","Type":"ContainerStarted","Data":"e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece"}
Feb 23 18:39:10 crc kubenswrapper[4768]: I0223 18:39:10.584440 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" podStartSLOduration=5.5844237880000005 podStartE2EDuration="5.584423788s" podCreationTimestamp="2026-02-23 18:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:10.582058213 +0000 UTC m=+345.972544013" watchObservedRunningTime="2026-02-23 18:39:10.584423788 +0000 UTC m=+345.974909588"
Feb 23 18:39:11 crc kubenswrapper[4768]: I0223 18:39:11.573311 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:11 crc kubenswrapper[4768]: I0223 18:39:11.580456 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.231511 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"]
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.232948 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" podUID="0a56e6a0-5d41-4e42-af22-9983161bd769" containerName="controller-manager" containerID="cri-o://14edf37676fb9add48bca8117c63728b79ac542c1691fe738ac292dddedb655c" gracePeriod=30
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.242072 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"]
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.242439 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" podUID="260dcc2d-93a8-4dfb-9107-29bf3f790514" containerName="route-controller-manager" containerID="cri-o://91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789" gracePeriod=30
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.712199 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.797732 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config\") pod \"260dcc2d-93a8-4dfb-9107-29bf3f790514\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") "
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.797844 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca\") pod \"260dcc2d-93a8-4dfb-9107-29bf3f790514\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") "
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.797968 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert\") pod \"260dcc2d-93a8-4dfb-9107-29bf3f790514\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") "
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.798094 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zljg\" (UniqueName: \"kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg\") pod \"260dcc2d-93a8-4dfb-9107-29bf3f790514\" (UID: \"260dcc2d-93a8-4dfb-9107-29bf3f790514\") "
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.798943 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config" (OuterVolumeSpecName: "config") pod "260dcc2d-93a8-4dfb-9107-29bf3f790514" (UID: "260dcc2d-93a8-4dfb-9107-29bf3f790514"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.799298 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca" (OuterVolumeSpecName: "client-ca") pod "260dcc2d-93a8-4dfb-9107-29bf3f790514" (UID: "260dcc2d-93a8-4dfb-9107-29bf3f790514"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.808670 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "260dcc2d-93a8-4dfb-9107-29bf3f790514" (UID: "260dcc2d-93a8-4dfb-9107-29bf3f790514"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.808974 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg" (OuterVolumeSpecName: "kube-api-access-6zljg") pod "260dcc2d-93a8-4dfb-9107-29bf3f790514" (UID: "260dcc2d-93a8-4dfb-9107-29bf3f790514"). InnerVolumeSpecName "kube-api-access-6zljg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.812522 4768 generic.go:334] "Generic (PLEG): container finished" podID="0a56e6a0-5d41-4e42-af22-9983161bd769" containerID="14edf37676fb9add48bca8117c63728b79ac542c1691fe738ac292dddedb655c" exitCode=0
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.812632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" event={"ID":"0a56e6a0-5d41-4e42-af22-9983161bd769","Type":"ContainerDied","Data":"14edf37676fb9add48bca8117c63728b79ac542c1691fe738ac292dddedb655c"}
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.812692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" event={"ID":"0a56e6a0-5d41-4e42-af22-9983161bd769","Type":"ContainerDied","Data":"e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece"}
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.812708 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e9510c4951e9490f6f42eb68dda4c1b3884738e00282787056b0ba1b979ece"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.814555 4768 generic.go:334] "Generic (PLEG): container finished" podID="260dcc2d-93a8-4dfb-9107-29bf3f790514" containerID="91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789" exitCode=0
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.814598 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.814619 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" event={"ID":"260dcc2d-93a8-4dfb-9107-29bf3f790514","Type":"ContainerDied","Data":"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"}
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.814668 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8" event={"ID":"260dcc2d-93a8-4dfb-9107-29bf3f790514","Type":"ContainerDied","Data":"80f33b314c3d9ac80cd4faf5fb9bec26b65844a9ac7cec6a58eb08cff933ca49"}
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.814695 4768 scope.go:117] "RemoveContainer" containerID="91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.841575 4768 scope.go:117] "RemoveContainer" containerID="91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"
Feb 23 18:39:44 crc kubenswrapper[4768]: E0223 18:39:44.842455 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789\": container with ID starting with 91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789 not found: ID does not exist" containerID="91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"
Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.842557 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789"} err="failed to get container status \"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789\": rpc error: code = NotFound desc
= could not find container \"91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789\": container with ID starting with 91071b9c224de591e752d1b65e363faf3318eaa4c76e096b34d4624460462789 not found: ID does not exist" Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.846564 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.877827 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"] Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.881870 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-685f5864cd-x45b8"] Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.900505 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zljg\" (UniqueName: \"kubernetes.io/projected/260dcc2d-93a8-4dfb-9107-29bf3f790514-kube-api-access-6zljg\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.900553 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.900564 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/260dcc2d-93a8-4dfb-9107-29bf3f790514-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:44 crc kubenswrapper[4768]: I0223 18:39:44.900574 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/260dcc2d-93a8-4dfb-9107-29bf3f790514-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.001053 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config\") pod \"0a56e6a0-5d41-4e42-af22-9983161bd769\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.001111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca\") pod \"0a56e6a0-5d41-4e42-af22-9983161bd769\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.001154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzc86\" (UniqueName: \"kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86\") pod \"0a56e6a0-5d41-4e42-af22-9983161bd769\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.001224 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles\") pod \"0a56e6a0-5d41-4e42-af22-9983161bd769\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.001268 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert\") pod \"0a56e6a0-5d41-4e42-af22-9983161bd769\" (UID: \"0a56e6a0-5d41-4e42-af22-9983161bd769\") " Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.002145 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca" (OuterVolumeSpecName: "client-ca") pod "0a56e6a0-5d41-4e42-af22-9983161bd769" (UID: "0a56e6a0-5d41-4e42-af22-9983161bd769"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.002158 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config" (OuterVolumeSpecName: "config") pod "0a56e6a0-5d41-4e42-af22-9983161bd769" (UID: "0a56e6a0-5d41-4e42-af22-9983161bd769"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.003175 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0a56e6a0-5d41-4e42-af22-9983161bd769" (UID: "0a56e6a0-5d41-4e42-af22-9983161bd769"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.004883 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0a56e6a0-5d41-4e42-af22-9983161bd769" (UID: "0a56e6a0-5d41-4e42-af22-9983161bd769"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.006090 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86" (OuterVolumeSpecName: "kube-api-access-gzc86") pod "0a56e6a0-5d41-4e42-af22-9983161bd769" (UID: "0a56e6a0-5d41-4e42-af22-9983161bd769"). InnerVolumeSpecName "kube-api-access-gzc86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.103873 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.103928 4768 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-client-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.103942 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzc86\" (UniqueName: \"kubernetes.io/projected/0a56e6a0-5d41-4e42-af22-9983161bd769-kube-api-access-gzc86\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.103956 4768 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0a56e6a0-5d41-4e42-af22-9983161bd769-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.103968 4768 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a56e6a0-5d41-4e42-af22-9983161bd769-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.319436 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260dcc2d-93a8-4dfb-9107-29bf3f790514" path="/var/lib/kubelet/pods/260dcc2d-93a8-4dfb-9107-29bf3f790514/volumes" Feb 23 18:39:45 crc kubenswrapper[4768]: I0223 18:39:45.826301 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dfbd889b8-8pqz6" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:45.861759 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"] Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:45.867399 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dfbd889b8-8pqz6"] Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.208340 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"] Feb 23 18:39:46 crc kubenswrapper[4768]: E0223 18:39:46.208619 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a56e6a0-5d41-4e42-af22-9983161bd769" containerName="controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.208634 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a56e6a0-5d41-4e42-af22-9983161bd769" containerName="controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: E0223 18:39:46.208668 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260dcc2d-93a8-4dfb-9107-29bf3f790514" containerName="route-controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.208677 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="260dcc2d-93a8-4dfb-9107-29bf3f790514" containerName="route-controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.208791 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="260dcc2d-93a8-4dfb-9107-29bf3f790514" containerName="route-controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.208807 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a56e6a0-5d41-4e42-af22-9983161bd769" containerName="controller-manager" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.209278 4768 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.210970 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"] Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.211936 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.213341 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.213346 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.213514 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.213716 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.214010 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.226062 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.227722 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.227932 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.228018 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.228060 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.229404 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.234511 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.234658 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"] Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.239647 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.241886 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"] 
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.322534 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr5q2\" (UniqueName: \"kubernetes.io/projected/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-kube-api-access-rr5q2\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.322578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-client-ca\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.322709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-serving-cert\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.322813 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9635ec5f-c286-4956-9cd2-469538838b45-serving-cert\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.322941 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.323060 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-config\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.323144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-client-ca\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.323218 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxmv\" (UniqueName: \"kubernetes.io/projected/9635ec5f-c286-4956-9cd2-469538838b45-kube-api-access-zcxmv\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.323495 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-config\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.424952 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxmv\" (UniqueName: \"kubernetes.io/projected/9635ec5f-c286-4956-9cd2-469538838b45-kube-api-access-zcxmv\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-config\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425104 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr5q2\" (UniqueName: \"kubernetes.io/projected/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-kube-api-access-rr5q2\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425136 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-client-ca\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-serving-cert\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9635ec5f-c286-4956-9cd2-469538838b45-serving-cert\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-config\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.425331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-client-ca\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.426668 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-client-ca\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.427983 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-client-ca\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.428428 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-config\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.429118 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-proxy-ca-bundles\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.432788 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9635ec5f-c286-4956-9cd2-469538838b45-config\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.435956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-serving-cert\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.438624 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jj9v"]
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.439054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9635ec5f-c286-4956-9cd2-469538838b45-serving-cert\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.439771 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.452483 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxmv\" (UniqueName: \"kubernetes.io/projected/9635ec5f-c286-4956-9cd2-469538838b45-kube-api-access-zcxmv\") pod \"controller-manager-b9cb6849f-lm7lq\" (UID: \"9635ec5f-c286-4956-9cd2-469538838b45\") " pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.463806 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jj9v"]
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.465209 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr5q2\" (UniqueName: \"kubernetes.io/projected/9efc27f8-c4a2-4c46-b4a3-38d35bc3c011-kube-api-access-rr5q2\") pod \"route-controller-manager-9b464578f-vddx6\" (UID: \"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011\") " pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-tls\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e7faa78-90e0-477b-a471-fd0e5d95983e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-bound-sa-token\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526755 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfn5h\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-kube-api-access-kfn5h\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526815 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-certificates\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526891 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-trusted-ca\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.526956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e7faa78-90e0-477b-a471-fd0e5d95983e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.545764 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.546877 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.559155 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.629913 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e7faa78-90e0-477b-a471-fd0e5d95983e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-tls\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630474 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e7faa78-90e0-477b-a471-fd0e5d95983e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630494 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-bound-sa-token\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfn5h\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-kube-api-access-kfn5h\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-certificates\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.630583 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-trusted-ca\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.631519 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e7faa78-90e0-477b-a471-fd0e5d95983e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.632218 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-certificates\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v"
Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.632900 4768 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e7faa78-90e0-477b-a471-fd0e5d95983e-trusted-ca\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.634551 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e7faa78-90e0-477b-a471-fd0e5d95983e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.636972 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-registry-tls\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.654054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfn5h\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-kube-api-access-kfn5h\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.655562 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e7faa78-90e0-477b-a471-fd0e5d95983e-bound-sa-token\") pod \"image-registry-66df7c8f76-6jj9v\" (UID: \"2e7faa78-90e0-477b-a471-fd0e5d95983e\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.762644 
4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6"] Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.815862 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.855003 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b9cb6849f-lm7lq"] Feb 23 18:39:46 crc kubenswrapper[4768]: W0223 18:39:46.862967 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9635ec5f_c286_4956_9cd2_469538838b45.slice/crio-a902146ae1c4c33e45bd4770d1e9a8a914f9855b96dad84c471d4db9f4af92fd WatchSource:0}: Error finding container a902146ae1c4c33e45bd4770d1e9a8a914f9855b96dad84c471d4db9f4af92fd: Status 404 returned error can't find the container with id a902146ae1c4c33e45bd4770d1e9a8a914f9855b96dad84c471d4db9f4af92fd Feb 23 18:39:46 crc kubenswrapper[4768]: I0223 18:39:46.867164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" event={"ID":"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011","Type":"ContainerStarted","Data":"e5f5adc50550bc036c91dbb3d76aadde593ff2ad4ee596b08a16b6f416ce5fbd"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.286271 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jj9v"] Feb 23 18:39:47 crc kubenswrapper[4768]: W0223 18:39:47.294120 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e7faa78_90e0_477b_a471_fd0e5d95983e.slice/crio-fb477cbc6dcbe9f2178333579c133719ffe65ef313e9ecf042219b67e79b3c6f WatchSource:0}: Error finding container 
fb477cbc6dcbe9f2178333579c133719ffe65ef313e9ecf042219b67e79b3c6f: Status 404 returned error can't find the container with id fb477cbc6dcbe9f2178333579c133719ffe65ef313e9ecf042219b67e79b3c6f Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.318046 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a56e6a0-5d41-4e42-af22-9983161bd769" path="/var/lib/kubelet/pods/0a56e6a0-5d41-4e42-af22-9983161bd769/volumes" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.876730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" event={"ID":"9635ec5f-c286-4956-9cd2-469538838b45","Type":"ContainerStarted","Data":"954bc1e3c273a44916201f18b770f761642db60399095fbcd3a5719bc51ac941"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.877319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" event={"ID":"9635ec5f-c286-4956-9cd2-469538838b45","Type":"ContainerStarted","Data":"a902146ae1c4c33e45bd4770d1e9a8a914f9855b96dad84c471d4db9f4af92fd"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.878308 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.887446 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" event={"ID":"9efc27f8-c4a2-4c46-b4a3-38d35bc3c011","Type":"ContainerStarted","Data":"28485997ccedbe79523105b12cc16584b5047a40f5c0a928c67ab575029bff0c"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.889691 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.890214 4768 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.897912 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b9cb6849f-lm7lq" podStartSLOduration=3.897888965 podStartE2EDuration="3.897888965s" podCreationTimestamp="2026-02-23 18:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:47.895000326 +0000 UTC m=+383.285486166" watchObservedRunningTime="2026-02-23 18:39:47.897888965 +0000 UTC m=+383.288374805" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.903300 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.908775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" event={"ID":"2e7faa78-90e0-477b-a471-fd0e5d95983e","Type":"ContainerStarted","Data":"798f4a3e99d4df13efb4c46a3b4e146d360805339e5baf293c2595eb9ccb60d1"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.908840 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" event={"ID":"2e7faa78-90e0-477b-a471-fd0e5d95983e","Type":"ContainerStarted","Data":"fb477cbc6dcbe9f2178333579c133719ffe65ef313e9ecf042219b67e79b3c6f"} Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.909807 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.949122 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9b464578f-vddx6" 
podStartSLOduration=3.949101876 podStartE2EDuration="3.949101876s" podCreationTimestamp="2026-02-23 18:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:47.94453759 +0000 UTC m=+383.335023440" watchObservedRunningTime="2026-02-23 18:39:47.949101876 +0000 UTC m=+383.339587686" Feb 23 18:39:47 crc kubenswrapper[4768]: I0223 18:39:47.980753 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" podStartSLOduration=1.980732316 podStartE2EDuration="1.980732316s" podCreationTimestamp="2026-02-23 18:39:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:39:47.977856737 +0000 UTC m=+383.368342547" watchObservedRunningTime="2026-02-23 18:39:47.980732316 +0000 UTC m=+383.371218126" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.095514 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"] Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.097846 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.101419 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.103975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgjw\" (UniqueName: \"kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.104114 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.104210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.116737 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"] Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.205520 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " 
pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.205610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.205695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlgjw\" (UniqueName: \"kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.206386 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.206443 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.236156 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlgjw\" (UniqueName: \"kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw\") pod \"redhat-operators-z5m2c\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") " pod="openshift-marketplace/redhat-operators-z5m2c" Feb 
23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.280031 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w88p6"] Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.281347 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.283616 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.300382 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w88p6"] Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.307083 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-catalog-content\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.307158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69c4g\" (UniqueName: \"kubernetes.io/projected/0b420d54-c936-4147-8f04-18d8c91b1701-kube-api-access-69c4g\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.307223 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-utilities\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc 
kubenswrapper[4768]: I0223 18:40:03.408753 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69c4g\" (UniqueName: \"kubernetes.io/projected/0b420d54-c936-4147-8f04-18d8c91b1701-kube-api-access-69c4g\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.408856 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-utilities\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.409620 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-utilities\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.410269 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-catalog-content\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.410779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b420d54-c936-4147-8f04-18d8c91b1701-catalog-content\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.430071 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69c4g\" (UniqueName: \"kubernetes.io/projected/0b420d54-c936-4147-8f04-18d8c91b1701-kube-api-access-69c4g\") pod \"certified-operators-w88p6\" (UID: \"0b420d54-c936-4147-8f04-18d8c91b1701\") " pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.442016 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.625089 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:03 crc kubenswrapper[4768]: I0223 18:40:03.965418 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"] Feb 23 18:40:03 crc kubenswrapper[4768]: W0223 18:40:03.968287 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03532675_9efc_4d5c_ae55_5c9e1d240346.slice/crio-3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9 WatchSource:0}: Error finding container 3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9: Status 404 returned error can't find the container with id 3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9 Feb 23 18:40:04 crc kubenswrapper[4768]: I0223 18:40:04.021355 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerStarted","Data":"3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9"} Feb 23 18:40:04 crc kubenswrapper[4768]: I0223 18:40:04.091722 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w88p6"] Feb 23 18:40:04 crc kubenswrapper[4768]: W0223 18:40:04.093244 4768 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b420d54_c936_4147_8f04_18d8c91b1701.slice/crio-98c7e6be3a3d620fa7ae619e64c523ba18e25e294c0dc634b44afb6ff9ec7bb4 WatchSource:0}: Error finding container 98c7e6be3a3d620fa7ae619e64c523ba18e25e294c0dc634b44afb6ff9ec7bb4: Status 404 returned error can't find the container with id 98c7e6be3a3d620fa7ae619e64c523ba18e25e294c0dc634b44afb6ff9ec7bb4 Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.031395 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b420d54-c936-4147-8f04-18d8c91b1701" containerID="89275b9a2198b8c4a8655ae869301adb1206e0524892f162436b533c5f7b29c3" exitCode=0 Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.031515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w88p6" event={"ID":"0b420d54-c936-4147-8f04-18d8c91b1701","Type":"ContainerDied","Data":"89275b9a2198b8c4a8655ae869301adb1206e0524892f162436b533c5f7b29c3"} Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.031566 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w88p6" event={"ID":"0b420d54-c936-4147-8f04-18d8c91b1701","Type":"ContainerStarted","Data":"98c7e6be3a3d620fa7ae619e64c523ba18e25e294c0dc634b44afb6ff9ec7bb4"} Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.034765 4768 generic.go:334] "Generic (PLEG): container finished" podID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerID="64e77c3e9416608b2edd38c9eaba38dccb3e1499cee3ef0b8dfd4fc4ae8aa706" exitCode=0 Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.035031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerDied","Data":"64e77c3e9416608b2edd38c9eaba38dccb3e1499cee3ef0b8dfd4fc4ae8aa706"} Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 
18:40:05.689862 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lgjxp"] Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.692712 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.697547 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.700047 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgjxp"] Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.741562 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8kn\" (UniqueName: \"kubernetes.io/projected/f3ce0320-02ba-4678-aa24-65028a4a84a7-kube-api-access-kx8kn\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.741955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-utilities\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.742232 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-catalog-content\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.843327 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-utilities\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.843568 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-catalog-content\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.843769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx8kn\" (UniqueName: \"kubernetes.io/projected/f3ce0320-02ba-4678-aa24-65028a4a84a7-kube-api-access-kx8kn\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.844553 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-catalog-content\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.845069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3ce0320-02ba-4678-aa24-65028a4a84a7-utilities\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.885613 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kx8kn\" (UniqueName: \"kubernetes.io/projected/f3ce0320-02ba-4678-aa24-65028a4a84a7-kube-api-access-kx8kn\") pod \"community-operators-lgjxp\" (UID: \"f3ce0320-02ba-4678-aa24-65028a4a84a7\") " pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.887979 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjln"] Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.899227 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.914164 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.924330 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjln"] Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.944731 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhb8c\" (UniqueName: \"kubernetes.io/projected/039380f0-e2fb-42b8-a034-0ed97dc84cc5-kube-api-access-mhb8c\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.944780 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-catalog-content\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:05 crc kubenswrapper[4768]: I0223 18:40:05.944845 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-utilities\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.020192 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.046176 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhb8c\" (UniqueName: \"kubernetes.io/projected/039380f0-e2fb-42b8-a034-0ed97dc84cc5-kube-api-access-mhb8c\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.046521 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-catalog-content\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.046589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-utilities\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.047474 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-utilities\") pod \"redhat-marketplace-cnjln\" (UID: 
\"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.050617 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/039380f0-e2fb-42b8-a034-0ed97dc84cc5-catalog-content\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.054235 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w88p6" event={"ID":"0b420d54-c936-4147-8f04-18d8c91b1701","Type":"ContainerStarted","Data":"c8e9760776e5fb98dabc1e45fb30a600b55c249595f4f63f7c76ac8dd2fde833"} Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.057121 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerStarted","Data":"9b19f4c5d65ba834a090b5bf5a0feb2a1886e2708dddf539743d2f83b4b48a96"} Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.076132 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhb8c\" (UniqueName: \"kubernetes.io/projected/039380f0-e2fb-42b8-a034-0ed97dc84cc5-kube-api-access-mhb8c\") pod \"redhat-marketplace-cnjln\" (UID: \"039380f0-e2fb-42b8-a034-0ed97dc84cc5\") " pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.267489 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.497388 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lgjxp"] Feb 23 18:40:06 crc kubenswrapper[4768]: W0223 18:40:06.503443 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3ce0320_02ba_4678_aa24_65028a4a84a7.slice/crio-8ced9359b468338b253c7ed553fed2141e788352f8be2747fe4512ed2158a36e WatchSource:0}: Error finding container 8ced9359b468338b253c7ed553fed2141e788352f8be2747fe4512ed2158a36e: Status 404 returned error can't find the container with id 8ced9359b468338b253c7ed553fed2141e788352f8be2747fe4512ed2158a36e Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.667522 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cnjln"] Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.823003 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6jj9v" Feb 23 18:40:06 crc kubenswrapper[4768]: I0223 18:40:06.881507 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"] Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.066935 4768 generic.go:334] "Generic (PLEG): container finished" podID="f3ce0320-02ba-4678-aa24-65028a4a84a7" containerID="0ba620c2e9d051960c58c1f0a9bb11b24cd7837bf401d9b30b3b2e86c5337d89" exitCode=0 Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.067045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgjxp" event={"ID":"f3ce0320-02ba-4678-aa24-65028a4a84a7","Type":"ContainerDied","Data":"0ba620c2e9d051960c58c1f0a9bb11b24cd7837bf401d9b30b3b2e86c5337d89"} Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.067083 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgjxp" event={"ID":"f3ce0320-02ba-4678-aa24-65028a4a84a7","Type":"ContainerStarted","Data":"8ced9359b468338b253c7ed553fed2141e788352f8be2747fe4512ed2158a36e"} Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.069520 4768 generic.go:334] "Generic (PLEG): container finished" podID="039380f0-e2fb-42b8-a034-0ed97dc84cc5" containerID="e8f3de74f1ac27e966ed20fdd16bf8afdeee8b22c8ad61b22be00c83fe74ae82" exitCode=0 Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.069603 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjln" event={"ID":"039380f0-e2fb-42b8-a034-0ed97dc84cc5","Type":"ContainerDied","Data":"e8f3de74f1ac27e966ed20fdd16bf8afdeee8b22c8ad61b22be00c83fe74ae82"} Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.069629 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjln" event={"ID":"039380f0-e2fb-42b8-a034-0ed97dc84cc5","Type":"ContainerStarted","Data":"44db6fe34d36f5b343fdbcab6f76f2ccf28112e50e9fd20e2609450ceadf47f4"} Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.074294 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b420d54-c936-4147-8f04-18d8c91b1701" containerID="c8e9760776e5fb98dabc1e45fb30a600b55c249595f4f63f7c76ac8dd2fde833" exitCode=0 Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.074408 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w88p6" event={"ID":"0b420d54-c936-4147-8f04-18d8c91b1701","Type":"ContainerDied","Data":"c8e9760776e5fb98dabc1e45fb30a600b55c249595f4f63f7c76ac8dd2fde833"} Feb 23 18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.078679 4768 generic.go:334] "Generic (PLEG): container finished" podID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerID="9b19f4c5d65ba834a090b5bf5a0feb2a1886e2708dddf539743d2f83b4b48a96" exitCode=0 Feb 23 
18:40:07 crc kubenswrapper[4768]: I0223 18:40:07.078745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerDied","Data":"9b19f4c5d65ba834a090b5bf5a0feb2a1886e2708dddf539743d2f83b4b48a96"} Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.087224 4768 generic.go:334] "Generic (PLEG): container finished" podID="039380f0-e2fb-42b8-a034-0ed97dc84cc5" containerID="ba51542367a911a819449ca49008e759b4cfdd050bb27c87e3c49ce6cd3947a5" exitCode=0 Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.087328 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjln" event={"ID":"039380f0-e2fb-42b8-a034-0ed97dc84cc5","Type":"ContainerDied","Data":"ba51542367a911a819449ca49008e759b4cfdd050bb27c87e3c49ce6cd3947a5"} Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.090676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w88p6" event={"ID":"0b420d54-c936-4147-8f04-18d8c91b1701","Type":"ContainerStarted","Data":"6d729148e4bc96fec9188758d16d1a4d9eb87bf94f464416588f8e7f436eaaa2"} Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.095641 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerStarted","Data":"29227f0d0451b267458252036338e0ee868916056c713bb860f4b14a3718a664"} Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.098775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgjxp" event={"ID":"f3ce0320-02ba-4678-aa24-65028a4a84a7","Type":"ContainerStarted","Data":"4f45ac1d5f9f4a14e244b6c737cd45588422cf5093b63860398f64206780c8d2"} Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.152797 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-z5m2c" podStartSLOduration=2.6985856200000002 podStartE2EDuration="5.152772555s" podCreationTimestamp="2026-02-23 18:40:03 +0000 UTC" firstStartedPulling="2026-02-23 18:40:05.036758552 +0000 UTC m=+400.427244382" lastFinishedPulling="2026-02-23 18:40:07.490945507 +0000 UTC m=+402.881431317" observedRunningTime="2026-02-23 18:40:08.150255697 +0000 UTC m=+403.540741507" watchObservedRunningTime="2026-02-23 18:40:08.152772555 +0000 UTC m=+403.543258355" Feb 23 18:40:08 crc kubenswrapper[4768]: I0223 18:40:08.175171 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w88p6" podStartSLOduration=2.7331924389999998 podStartE2EDuration="5.175148705s" podCreationTimestamp="2026-02-23 18:40:03 +0000 UTC" firstStartedPulling="2026-02-23 18:40:05.033988878 +0000 UTC m=+400.424474718" lastFinishedPulling="2026-02-23 18:40:07.475945184 +0000 UTC m=+402.866430984" observedRunningTime="2026-02-23 18:40:08.174534998 +0000 UTC m=+403.565020808" watchObservedRunningTime="2026-02-23 18:40:08.175148705 +0000 UTC m=+403.565634505" Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.107320 4768 generic.go:334] "Generic (PLEG): container finished" podID="f3ce0320-02ba-4678-aa24-65028a4a84a7" containerID="4f45ac1d5f9f4a14e244b6c737cd45588422cf5093b63860398f64206780c8d2" exitCode=0 Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.107380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgjxp" event={"ID":"f3ce0320-02ba-4678-aa24-65028a4a84a7","Type":"ContainerDied","Data":"4f45ac1d5f9f4a14e244b6c737cd45588422cf5093b63860398f64206780c8d2"} Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.111449 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cnjln" 
event={"ID":"039380f0-e2fb-42b8-a034-0ed97dc84cc5","Type":"ContainerStarted","Data":"1b442ce27ed8b5c2d347cf83eac5a8705b8395ddf64ff861844d19d47839a26b"} Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.170414 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cnjln" podStartSLOduration=2.758112811 podStartE2EDuration="4.170381123s" podCreationTimestamp="2026-02-23 18:40:05 +0000 UTC" firstStartedPulling="2026-02-23 18:40:07.071484038 +0000 UTC m=+402.461969878" lastFinishedPulling="2026-02-23 18:40:08.48375239 +0000 UTC m=+403.874238190" observedRunningTime="2026-02-23 18:40:09.162121782 +0000 UTC m=+404.552607602" watchObservedRunningTime="2026-02-23 18:40:09.170381123 +0000 UTC m=+404.560866953" Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.548174 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:40:09 crc kubenswrapper[4768]: I0223 18:40:09.548459 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:40:10 crc kubenswrapper[4768]: I0223 18:40:10.120918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lgjxp" event={"ID":"f3ce0320-02ba-4678-aa24-65028a4a84a7","Type":"ContainerStarted","Data":"cafbe85762f26290eab8b07f8bea348d658783f9bae38a2f1831b977e713ea81"} Feb 23 18:40:10 crc kubenswrapper[4768]: I0223 18:40:10.139587 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-lgjxp" podStartSLOduration=2.738072803 podStartE2EDuration="5.139567464s" podCreationTimestamp="2026-02-23 18:40:05 +0000 UTC" firstStartedPulling="2026-02-23 18:40:07.069241637 +0000 UTC m=+402.459727447" lastFinishedPulling="2026-02-23 18:40:09.470736308 +0000 UTC m=+404.861222108" observedRunningTime="2026-02-23 18:40:10.136631376 +0000 UTC m=+405.527117196" watchObservedRunningTime="2026-02-23 18:40:10.139567464 +0000 UTC m=+405.530053284" Feb 23 18:40:13 crc kubenswrapper[4768]: I0223 18:40:13.442361 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:13 crc kubenswrapper[4768]: I0223 18:40:13.442893 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:13 crc kubenswrapper[4768]: I0223 18:40:13.625676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:13 crc kubenswrapper[4768]: I0223 18:40:13.625754 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:13 crc kubenswrapper[4768]: I0223 18:40:13.690629 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:14 crc kubenswrapper[4768]: I0223 18:40:14.008135 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w88p6" Feb 23 18:40:14 crc kubenswrapper[4768]: I0223 18:40:14.495661 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z5m2c" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="registry-server" probeResult="failure" output=< Feb 23 18:40:14 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 
1s Feb 23 18:40:14 crc kubenswrapper[4768]: > Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.020824 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.022166 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.093047 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.269746 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.269834 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:16 crc kubenswrapper[4768]: I0223 18:40:16.320278 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:17 crc kubenswrapper[4768]: I0223 18:40:17.014858 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lgjxp" Feb 23 18:40:17 crc kubenswrapper[4768]: I0223 18:40:17.026602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cnjln" Feb 23 18:40:23 crc kubenswrapper[4768]: I0223 18:40:23.491667 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:23 crc kubenswrapper[4768]: I0223 18:40:23.540968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5m2c" Feb 23 18:40:31 crc kubenswrapper[4768]: I0223 
18:40:31.931358 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" podUID="bee9fc28-f46f-41fe-86e9-b14cdead9120" containerName="registry" containerID="cri-o://dc0fd87c76df3f0965b5a2e11f81c1b8173c40630c9e8f9e0404e8b1c2f60207" gracePeriod=30 Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.096613 4768 generic.go:334] "Generic (PLEG): container finished" podID="bee9fc28-f46f-41fe-86e9-b14cdead9120" containerID="dc0fd87c76df3f0965b5a2e11f81c1b8173c40630c9e8f9e0404e8b1c2f60207" exitCode=0 Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.096681 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" event={"ID":"bee9fc28-f46f-41fe-86e9-b14cdead9120","Type":"ContainerDied","Data":"dc0fd87c76df3f0965b5a2e11f81c1b8173c40630c9e8f9e0404e8b1c2f60207"} Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.506811 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578683 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578749 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578815 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578882 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.578996 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-j2b4w\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.579072 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.579325 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets\") pod \"bee9fc28-f46f-41fe-86e9-b14cdead9120\" (UID: \"bee9fc28-f46f-41fe-86e9-b14cdead9120\") " Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.580597 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.580772 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.581471 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.581508 4768 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.590548 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.590953 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.591762 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w" (OuterVolumeSpecName: "kube-api-access-j2b4w") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "kube-api-access-j2b4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.592906 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.596794 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.612825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bee9fc28-f46f-41fe-86e9-b14cdead9120" (UID: "bee9fc28-f46f-41fe-86e9-b14cdead9120"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.683382 4768 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bee9fc28-f46f-41fe-86e9-b14cdead9120-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.683454 4768 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.683478 4768 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.683499 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2b4w\" (UniqueName: \"kubernetes.io/projected/bee9fc28-f46f-41fe-86e9-b14cdead9120-kube-api-access-j2b4w\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:32 crc kubenswrapper[4768]: I0223 18:40:32.683528 4768 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bee9fc28-f46f-41fe-86e9-b14cdead9120-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 18:40:33.108200 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" event={"ID":"bee9fc28-f46f-41fe-86e9-b14cdead9120","Type":"ContainerDied","Data":"5562d0898e94b52827ef30f29c946338f72348439a240d628de5890708f32857"} Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 18:40:33.108333 4768 scope.go:117] "RemoveContainer" containerID="dc0fd87c76df3f0965b5a2e11f81c1b8173c40630c9e8f9e0404e8b1c2f60207" Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 
18:40:33.108330 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-jdbtb" Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 18:40:33.166886 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"] Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 18:40:33.175272 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-jdbtb"] Feb 23 18:40:33 crc kubenswrapper[4768]: I0223 18:40:33.320867 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee9fc28-f46f-41fe-86e9-b14cdead9120" path="/var/lib/kubelet/pods/bee9fc28-f46f-41fe-86e9-b14cdead9120/volumes" Feb 23 18:40:39 crc kubenswrapper[4768]: I0223 18:40:39.545856 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:40:39 crc kubenswrapper[4768]: I0223 18:40:39.546242 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:41:09 crc kubenswrapper[4768]: I0223 18:41:09.545688 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:41:09 crc kubenswrapper[4768]: I0223 18:41:09.546564 4768 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:41:09 crc kubenswrapper[4768]: I0223 18:41:09.546653 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:41:09 crc kubenswrapper[4768]: I0223 18:41:09.547678 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:41:09 crc kubenswrapper[4768]: I0223 18:41:09.547802 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81" gracePeriod=600 Feb 23 18:41:10 crc kubenswrapper[4768]: I0223 18:41:10.390522 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81" exitCode=0 Feb 23 18:41:10 crc kubenswrapper[4768]: I0223 18:41:10.390586 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81"} Feb 23 18:41:10 crc kubenswrapper[4768]: I0223 18:41:10.391517 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1"} Feb 23 18:41:10 crc kubenswrapper[4768]: I0223 18:41:10.391560 4768 scope.go:117] "RemoveContainer" containerID="cbf1c2b7c2702f869cd85ce1dd29b1ce09f3dafe621129b477809506cd43835f" Feb 23 18:43:09 crc kubenswrapper[4768]: I0223 18:43:09.545121 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:43:09 crc kubenswrapper[4768]: I0223 18:43:09.545905 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.390771 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9"] Feb 23 18:43:29 crc kubenswrapper[4768]: E0223 18:43:29.391562 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee9fc28-f46f-41fe-86e9-b14cdead9120" containerName="registry" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.391576 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee9fc28-f46f-41fe-86e9-b14cdead9120" containerName="registry" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.391677 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee9fc28-f46f-41fe-86e9-b14cdead9120" containerName="registry" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.392097 4768 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.395704 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.395919 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.399407 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cn4x9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.417305 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9"] Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.434080 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2pxdp"] Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.446192 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5xqnq"] Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.446486 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2pxdp" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.446980 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.448493 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2pxdp"] Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.453999 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tz6f7" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.454292 4768 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-q6gdg" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.475042 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5xqnq"] Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.495974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74rh\" (UniqueName: \"kubernetes.io/projected/2434360d-4475-492b-b0d6-d2105f2cf727-kube-api-access-w74rh\") pod \"cert-manager-cainjector-cf98fcc89-kbhg9\" (UID: \"2434360d-4475-492b-b0d6-d2105f2cf727\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.597359 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h48x5\" (UniqueName: \"kubernetes.io/projected/ce41d193-31cd-4318-b8a6-9f0663e19dd1-kube-api-access-h48x5\") pod \"cert-manager-858654f9db-2pxdp\" (UID: \"ce41d193-31cd-4318-b8a6-9f0663e19dd1\") " pod="cert-manager/cert-manager-858654f9db-2pxdp" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.597451 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w74rh\" (UniqueName: \"kubernetes.io/projected/2434360d-4475-492b-b0d6-d2105f2cf727-kube-api-access-w74rh\") pod \"cert-manager-cainjector-cf98fcc89-kbhg9\" (UID: 
\"2434360d-4475-492b-b0d6-d2105f2cf727\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.597759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9zc\" (UniqueName: \"kubernetes.io/projected/9e4e6814-0ed0-42f2-a94e-27bb939aa62f-kube-api-access-sn9zc\") pod \"cert-manager-webhook-687f57d79b-5xqnq\" (UID: \"9e4e6814-0ed0-42f2-a94e-27bb939aa62f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.621609 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w74rh\" (UniqueName: \"kubernetes.io/projected/2434360d-4475-492b-b0d6-d2105f2cf727-kube-api-access-w74rh\") pod \"cert-manager-cainjector-cf98fcc89-kbhg9\" (UID: \"2434360d-4475-492b-b0d6-d2105f2cf727\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.698843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9zc\" (UniqueName: \"kubernetes.io/projected/9e4e6814-0ed0-42f2-a94e-27bb939aa62f-kube-api-access-sn9zc\") pod \"cert-manager-webhook-687f57d79b-5xqnq\" (UID: \"9e4e6814-0ed0-42f2-a94e-27bb939aa62f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.698899 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h48x5\" (UniqueName: \"kubernetes.io/projected/ce41d193-31cd-4318-b8a6-9f0663e19dd1-kube-api-access-h48x5\") pod \"cert-manager-858654f9db-2pxdp\" (UID: \"ce41d193-31cd-4318-b8a6-9f0663e19dd1\") " pod="cert-manager/cert-manager-858654f9db-2pxdp" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.708396 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.716301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h48x5\" (UniqueName: \"kubernetes.io/projected/ce41d193-31cd-4318-b8a6-9f0663e19dd1-kube-api-access-h48x5\") pod \"cert-manager-858654f9db-2pxdp\" (UID: \"ce41d193-31cd-4318-b8a6-9f0663e19dd1\") " pod="cert-manager/cert-manager-858654f9db-2pxdp" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.717508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9zc\" (UniqueName: \"kubernetes.io/projected/9e4e6814-0ed0-42f2-a94e-27bb939aa62f-kube-api-access-sn9zc\") pod \"cert-manager-webhook-687f57d79b-5xqnq\" (UID: \"9e4e6814-0ed0-42f2-a94e-27bb939aa62f\") " pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.776100 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2pxdp" Feb 23 18:43:29 crc kubenswrapper[4768]: I0223 18:43:29.800161 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.017794 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2pxdp"] Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.030314 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.041754 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9"] Feb 23 18:43:30 crc kubenswrapper[4768]: W0223 18:43:30.046903 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2434360d_4475_492b_b0d6_d2105f2cf727.slice/crio-065e980c887ee005d9048488a44a943dda6076e51b262f795969dfa8e5dd6d16 WatchSource:0}: Error finding container 065e980c887ee005d9048488a44a943dda6076e51b262f795969dfa8e5dd6d16: Status 404 returned error can't find the container with id 065e980c887ee005d9048488a44a943dda6076e51b262f795969dfa8e5dd6d16 Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.078662 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-5xqnq"] Feb 23 18:43:30 crc kubenswrapper[4768]: W0223 18:43:30.086158 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e4e6814_0ed0_42f2_a94e_27bb939aa62f.slice/crio-712c89f7c8dd0907d546c491a6b3fef5e5171816f5cdf7dac39bba35adecc42a WatchSource:0}: Error finding container 712c89f7c8dd0907d546c491a6b3fef5e5171816f5cdf7dac39bba35adecc42a: Status 404 returned error can't find the container with id 712c89f7c8dd0907d546c491a6b3fef5e5171816f5cdf7dac39bba35adecc42a Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.443310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-2pxdp" event={"ID":"ce41d193-31cd-4318-b8a6-9f0663e19dd1","Type":"ContainerStarted","Data":"8aebf39d075e61a611f5f5e721d2450ac6bf96dd75e95c4bb91fcfc85ad6c6a5"} Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.445785 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" event={"ID":"2434360d-4475-492b-b0d6-d2105f2cf727","Type":"ContainerStarted","Data":"065e980c887ee005d9048488a44a943dda6076e51b262f795969dfa8e5dd6d16"} Feb 23 18:43:30 crc kubenswrapper[4768]: I0223 18:43:30.447299 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" event={"ID":"9e4e6814-0ed0-42f2-a94e-27bb939aa62f","Type":"ContainerStarted","Data":"712c89f7c8dd0907d546c491a6b3fef5e5171816f5cdf7dac39bba35adecc42a"} Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.474407 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" event={"ID":"2434360d-4475-492b-b0d6-d2105f2cf727","Type":"ContainerStarted","Data":"cae8ea0c8596ab948f2faa2f4d13a4ebc6e1f6a642109f4d9bd5f96364e0ffe0"} Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.476035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" event={"ID":"9e4e6814-0ed0-42f2-a94e-27bb939aa62f","Type":"ContainerStarted","Data":"fe99927bc456a654b04c077de7f07039b29ee0dbcda6b88c2c0e8ca3e1c85368"} Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.476361 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.477754 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2pxdp" 
event={"ID":"ce41d193-31cd-4318-b8a6-9f0663e19dd1","Type":"ContainerStarted","Data":"6de2f85dd8a8b4c31a9afdc6d1766e6b7fd7f0400aa93cc61fc93d1a5f1542ad"} Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.496579 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-kbhg9" podStartSLOduration=1.432378432 podStartE2EDuration="5.496561528s" podCreationTimestamp="2026-02-23 18:43:29 +0000 UTC" firstStartedPulling="2026-02-23 18:43:30.049131445 +0000 UTC m=+605.439617245" lastFinishedPulling="2026-02-23 18:43:34.113314511 +0000 UTC m=+609.503800341" observedRunningTime="2026-02-23 18:43:34.493657967 +0000 UTC m=+609.884143767" watchObservedRunningTime="2026-02-23 18:43:34.496561528 +0000 UTC m=+609.887047328" Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.516881 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2pxdp" podStartSLOduration=1.43304107 podStartE2EDuration="5.516856006s" podCreationTimestamp="2026-02-23 18:43:29 +0000 UTC" firstStartedPulling="2026-02-23 18:43:30.028076575 +0000 UTC m=+605.418562365" lastFinishedPulling="2026-02-23 18:43:34.111891471 +0000 UTC m=+609.502377301" observedRunningTime="2026-02-23 18:43:34.515393295 +0000 UTC m=+609.905879115" watchObservedRunningTime="2026-02-23 18:43:34.516856006 +0000 UTC m=+609.907341816" Feb 23 18:43:34 crc kubenswrapper[4768]: I0223 18:43:34.539683 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" podStartSLOduration=1.418990284 podStartE2EDuration="5.539665573s" podCreationTimestamp="2026-02-23 18:43:29 +0000 UTC" firstStartedPulling="2026-02-23 18:43:30.08853995 +0000 UTC m=+605.479025750" lastFinishedPulling="2026-02-23 18:43:34.209215199 +0000 UTC m=+609.599701039" observedRunningTime="2026-02-23 18:43:34.537502744 +0000 UTC m=+609.927988544" watchObservedRunningTime="2026-02-23 
18:43:34.539665573 +0000 UTC m=+609.930151373" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.142935 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbxnc"] Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144433 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-controller" containerID="cri-o://f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144465 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="sbdb" containerID="cri-o://bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144525 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144657 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-acl-logging" containerID="cri-o://6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144618 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-node" 
containerID="cri-o://1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144745 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="nbdb" containerID="cri-o://8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.144759 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="northd" containerID="cri-o://e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.232936 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" containerID="cri-o://24e883ed5968ad2aea0d730fc1c6b926281a7fe9bcc2898e80b8cbb9b2cb5f09" gracePeriod=30 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.511019 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/1.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.512092 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/0.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.512143 4768 generic.go:334] "Generic (PLEG): container finished" podID="1c7d1a60-c63e-4279-9ce9-4eea677d4a70" containerID="d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d" exitCode=2 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.512222 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-rcq8b" event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerDied","Data":"d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.512291 4768 scope.go:117] "RemoveContainer" containerID="f9facf95ce28195896e7a8b85d30399112baaeafac572a7353ed621fc0c442e1" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.512851 4768 scope.go:117] "RemoveContainer" containerID="d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.513132 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rcq8b_openshift-multus(1c7d1a60-c63e-4279-9ce9-4eea677d4a70)\"" pod="openshift-multus/multus-rcq8b" podUID="1c7d1a60-c63e-4279-9ce9-4eea677d4a70" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.516149 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/2.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.518366 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-acl-logging/0.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.518947 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-controller/0.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519897 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="24e883ed5968ad2aea0d730fc1c6b926281a7fe9bcc2898e80b8cbb9b2cb5f09" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519920 4768 generic.go:334] "Generic 
(PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519930 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519939 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519949 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519957 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450" exitCode=0 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519966 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96" exitCode=143 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519975 4768 generic.go:334] "Generic (PLEG): container finished" podID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerID="f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3" exitCode=143 Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.519994 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" 
event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"24e883ed5968ad2aea0d730fc1c6b926281a7fe9bcc2898e80b8cbb9b2cb5f09"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520017 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520043 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520054 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520066 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" 
event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520091 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" event={"ID":"dfa4db1d-97c7-44ee-be87-27167edeb9a9","Type":"ContainerDied","Data":"b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307"} Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.520116 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b280c1b893b47f2c87e37a7a37bd6f20531d139c139801c2cbb3c74e00bdd307" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.534678 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovnkube-controller/2.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.542911 4768 scope.go:117] "RemoveContainer" containerID="925434971ae59e0b140440f3ce9c7484b990f780d358ce86fd19453a7404bbfb" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.544861 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.544918 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.546936 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-acl-logging/0.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.547625 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-controller/0.log" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.548066 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.603948 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9r4zk"] Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.604830 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="nbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605028 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="nbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605116 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605130 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-ovn-metrics" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605141 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" 
containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605149 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605166 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-node" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605174 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-node" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605197 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-acl-logging" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605205 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-acl-logging" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605217 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605224 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605234 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605240 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605272 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="northd" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605279 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="northd" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605292 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="sbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605297 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="sbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605307 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605317 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.605328 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kubecfg-setup" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.605335 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kubecfg-setup" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606116 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="northd" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606132 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="nbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606146 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" 
containerName="kube-rbac-proxy-ovn-metrics" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606155 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="kube-rbac-proxy-node" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606169 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606176 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606186 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-acl-logging" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606196 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="sbdb" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606203 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606214 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.606226 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovn-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: E0223 18:43:39.607448 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.607463 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" containerName="ovnkube-controller" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.615386 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658817 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash" (OuterVolumeSpecName: "host-slash") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658855 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658890 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket" (OuterVolumeSpecName: "log-socket") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658972 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.658993 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659021 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659058 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659097 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659157 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxnx4\" (UniqueName: \"kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659192 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659234 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659338 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659437 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659484 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659517 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659553 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: 
\"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659561 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659574 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659602 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659645 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659734 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659770 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659777 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn\") pod \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\" (UID: \"dfa4db1d-97c7-44ee-be87-27167edeb9a9\") " Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659801 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659830 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659885 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log" (OuterVolumeSpecName: "node-log") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.659991 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660115 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660242 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660409 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660477 4768 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660501 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660514 4768 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660527 4768 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660541 4768 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-slash\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660554 4768 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-log-socket\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660565 4768 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660579 4768 reconciler_common.go:293] "Volume 
detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660595 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660608 4768 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660620 4768 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660634 4768 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660646 4768 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660688 4768 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660710 4768 reconciler_common.go:293] "Volume detached for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-node-log\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.660723 4768 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.664626 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.664650 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4" (OuterVolumeSpecName: "kube-api-access-gxnx4") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "kube-api-access-gxnx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.676424 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "dfa4db1d-97c7-44ee-be87-27167edeb9a9" (UID: "dfa4db1d-97c7-44ee-be87-27167edeb9a9"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762204 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-etc-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-bin\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762327 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1612a3f0-aa52-443e-89bb-d045469c7b96-ovn-node-metrics-cert\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762485 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-config\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-env-overrides\") pod \"ovnkube-node-9r4zk\" (UID: 
\"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762721 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-kubelet\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-script-lib\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762891 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-systemd-units\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762939 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-ovn\") pod \"ovnkube-node-9r4zk\" 
(UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.762974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-node-log\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763152 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-var-lib-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763205 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh688\" (UniqueName: \"kubernetes.io/projected/1612a3f0-aa52-443e-89bb-d045469c7b96-kube-api-access-zh688\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763327 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-systemd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-log-socket\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763410 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-slash\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-netns\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763583 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-netd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763690 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxnx4\" (UniqueName: \"kubernetes.io/projected/dfa4db1d-97c7-44ee-be87-27167edeb9a9-kube-api-access-gxnx4\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763717 4768 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763734 4768 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dfa4db1d-97c7-44ee-be87-27167edeb9a9-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.763749 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dfa4db1d-97c7-44ee-be87-27167edeb9a9-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.804018 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-5xqnq" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.864897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-netns\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865390 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-netd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865560 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-etc-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865722 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-bin\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865879 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1612a3f0-aa52-443e-89bb-d045469c7b96-ovn-node-metrics-cert\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865988 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-config\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865631 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" 
(UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-etc-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865484 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-netd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865113 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-netns\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866080 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-env-overrides\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866241 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.865796 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-cni-bin\") pod 
\"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866320 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-kubelet\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866326 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-kubelet\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-script-lib\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866551 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-systemd-units\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-ovn\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866672 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-node-log\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-ovn\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866757 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866812 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866827 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-node-log\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-var-lib-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866756 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-systemd-units\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-var-lib-openvswitch\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.866922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh688\" (UniqueName: \"kubernetes.io/projected/1612a3f0-aa52-443e-89bb-d045469c7b96-kube-api-access-zh688\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc 
kubenswrapper[4768]: I0223 18:43:39.866972 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867018 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-systemd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-log-socket\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867080 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-run-ovn-kubernetes\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867122 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-slash\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867129 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-run-systemd\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867196 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-host-slash\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1612a3f0-aa52-443e-89bb-d045469c7b96-log-socket\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867552 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-config\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.867834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-env-overrides\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.868059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/1612a3f0-aa52-443e-89bb-d045469c7b96-ovnkube-script-lib\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.869647 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1612a3f0-aa52-443e-89bb-d045469c7b96-ovn-node-metrics-cert\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.893671 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh688\" (UniqueName: \"kubernetes.io/projected/1612a3f0-aa52-443e-89bb-d045469c7b96-kube-api-access-zh688\") pod \"ovnkube-node-9r4zk\" (UID: \"1612a3f0-aa52-443e-89bb-d045469c7b96\") " pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: I0223 18:43:39.930515 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:39 crc kubenswrapper[4768]: W0223 18:43:39.965431 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1612a3f0_aa52_443e_89bb_d045469c7b96.slice/crio-104a754f57738562bdbe4f975d7b28a3f2002a0d3e1708c9aa59fc0ea223dd86 WatchSource:0}: Error finding container 104a754f57738562bdbe4f975d7b28a3f2002a0d3e1708c9aa59fc0ea223dd86: Status 404 returned error can't find the container with id 104a754f57738562bdbe4f975d7b28a3f2002a0d3e1708c9aa59fc0ea223dd86 Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.533951 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-acl-logging/0.log" Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.534867 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbxnc_dfa4db1d-97c7-44ee-be87-27167edeb9a9/ovn-controller/0.log" Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.535806 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbxnc" Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.538505 4768 generic.go:334] "Generic (PLEG): container finished" podID="1612a3f0-aa52-443e-89bb-d045469c7b96" containerID="4fa9d462ee260947d0d6e8c18d3c9be2cfd26e3b6afd4a4103b10be448c3079d" exitCode=0 Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.538578 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerDied","Data":"4fa9d462ee260947d0d6e8c18d3c9be2cfd26e3b6afd4a4103b10be448c3079d"} Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.538803 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"104a754f57738562bdbe4f975d7b28a3f2002a0d3e1708c9aa59fc0ea223dd86"} Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.541678 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/1.log" Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.650113 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbxnc"] Feb 23 18:43:40 crc kubenswrapper[4768]: I0223 18:43:40.654670 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbxnc"] Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.315571 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfa4db1d-97c7-44ee-be87-27167edeb9a9" path="/var/lib/kubelet/pods/dfa4db1d-97c7-44ee-be87-27167edeb9a9/volumes" Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552090 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" 
event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"bbbeafbe506afb39192e189afac867f02dcbf2491f9311250ff3446e52d3ecd6"} Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552144 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"77d80dbe20395bb179f0e4eb374a43fb987a744dfd02fa08a601142f698c5b38"} Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"9f3dd2f065d73001971b2d65dcc3852f0e37196a355e5a024c417088b3417176"} Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552170 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"2a292992226aa8971c16ac8bcea2aaeac80581b7bda233bcf2d6c2d805bc7308"} Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"8939b5bfc36631ac567d419599b1612c420b89e7426ee150e9bb47dbf6ee818d"} Feb 23 18:43:41 crc kubenswrapper[4768]: I0223 18:43:41.552196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"29798888c8efa607090712259c0ce4af7502d99efc2f8f868ea7908132e1b705"} Feb 23 18:43:43 crc kubenswrapper[4768]: I0223 18:43:43.571878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" 
event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"9957f932447997d8db265270727fd6285a2ba808544f460e031e6ae190eea682"} Feb 23 18:43:46 crc kubenswrapper[4768]: I0223 18:43:46.597065 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" event={"ID":"1612a3f0-aa52-443e-89bb-d045469c7b96","Type":"ContainerStarted","Data":"3575757cdb9460ae2ef89948576143c0360840272875ccd066699bfe966faa24"} Feb 23 18:43:46 crc kubenswrapper[4768]: I0223 18:43:46.597506 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:46 crc kubenswrapper[4768]: I0223 18:43:46.597537 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:46 crc kubenswrapper[4768]: I0223 18:43:46.642154 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" podStartSLOduration=7.642117596 podStartE2EDuration="7.642117596s" podCreationTimestamp="2026-02-23 18:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:43:46.634920809 +0000 UTC m=+622.025406659" watchObservedRunningTime="2026-02-23 18:43:46.642117596 +0000 UTC m=+622.032603436" Feb 23 18:43:46 crc kubenswrapper[4768]: I0223 18:43:46.693222 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:47 crc kubenswrapper[4768]: I0223 18:43:47.605302 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:47 crc kubenswrapper[4768]: I0223 18:43:47.649446 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:43:51 crc 
kubenswrapper[4768]: I0223 18:43:51.308180 4768 scope.go:117] "RemoveContainer" containerID="d3b7f73b42148e3f5e6ed0ffd0636c98340964ba5b2b7c0cb0970f40c037b49d" Feb 23 18:43:51 crc kubenswrapper[4768]: I0223 18:43:51.667522 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rcq8b_1c7d1a60-c63e-4279-9ce9-4eea677d4a70/kube-multus/1.log" Feb 23 18:43:51 crc kubenswrapper[4768]: I0223 18:43:51.667607 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rcq8b" event={"ID":"1c7d1a60-c63e-4279-9ce9-4eea677d4a70","Type":"ContainerStarted","Data":"a7aca8c0e2b8c8665f6f73283fc4b65cb627c685eae21803b8c27cf75af82c6e"} Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.545353 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.546147 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.546224 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.547083 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.547187 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1" gracePeriod=600 Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.821450 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1" exitCode=0 Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.821746 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1"} Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.821783 4768 scope.go:117] "RemoveContainer" containerID="adca3adca094384b182d667ac8baf056e7660628e81b045a9a497d28c2962b81" Feb 23 18:44:09 crc kubenswrapper[4768]: I0223 18:44:09.970149 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9r4zk" Feb 23 18:44:10 crc kubenswrapper[4768]: I0223 18:44:10.833361 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9"} Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.756809 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn"] Feb 23 
18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.759302 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.761645 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.770294 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn"] Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.859192 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.859282 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fthth\" (UniqueName: \"kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.859319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.961214 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.961303 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fthth\" (UniqueName: \"kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.961336 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.961890 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.961920 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:17 crc kubenswrapper[4768]: I0223 18:44:17.996891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fthth\" (UniqueName: \"kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:18 crc kubenswrapper[4768]: I0223 18:44:18.087910 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:18 crc kubenswrapper[4768]: I0223 18:44:18.338099 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn"] Feb 23 18:44:18 crc kubenswrapper[4768]: I0223 18:44:18.892643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerStarted","Data":"db66058ce52a868d88d00d907e5d559966879a1692932119776368c965d772ea"} Feb 23 18:44:18 crc kubenswrapper[4768]: I0223 18:44:18.892719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" 
event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerStarted","Data":"9ee688fb2f51617f5077f651fbee64ab239659910e11c5932d2d1f7bcb412755"} Feb 23 18:44:19 crc kubenswrapper[4768]: I0223 18:44:19.903498 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerID="db66058ce52a868d88d00d907e5d559966879a1692932119776368c965d772ea" exitCode=0 Feb 23 18:44:19 crc kubenswrapper[4768]: I0223 18:44:19.903554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerDied","Data":"db66058ce52a868d88d00d907e5d559966879a1692932119776368c965d772ea"} Feb 23 18:44:21 crc kubenswrapper[4768]: I0223 18:44:21.923306 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerID="767598879b998d7c402b94ea0f7a5b6657df61a512211861163c4abd6eb1d2a0" exitCode=0 Feb 23 18:44:21 crc kubenswrapper[4768]: I0223 18:44:21.923466 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerDied","Data":"767598879b998d7c402b94ea0f7a5b6657df61a512211861163c4abd6eb1d2a0"} Feb 23 18:44:22 crc kubenswrapper[4768]: I0223 18:44:22.948533 4768 generic.go:334] "Generic (PLEG): container finished" podID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerID="c9918baaa7c3d23b455a6c083b7c5c7ce15876456f211c4b6e05d2c760c1918a" exitCode=0 Feb 23 18:44:22 crc kubenswrapper[4768]: I0223 18:44:22.948637 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerDied","Data":"c9918baaa7c3d23b455a6c083b7c5c7ce15876456f211c4b6e05d2c760c1918a"} 
Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.299064 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.362868 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fthth\" (UniqueName: \"kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth\") pod \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.363012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util\") pod \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.363172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle\") pod \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\" (UID: \"e6bb516f-a8f7-417d-bc13-cca686ed2bdd\") " Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.364322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle" (OuterVolumeSpecName: "bundle") pod "e6bb516f-a8f7-417d-bc13-cca686ed2bdd" (UID: "e6bb516f-a8f7-417d-bc13-cca686ed2bdd"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.373474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util" (OuterVolumeSpecName: "util") pod "e6bb516f-a8f7-417d-bc13-cca686ed2bdd" (UID: "e6bb516f-a8f7-417d-bc13-cca686ed2bdd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.375074 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth" (OuterVolumeSpecName: "kube-api-access-fthth") pod "e6bb516f-a8f7-417d-bc13-cca686ed2bdd" (UID: "e6bb516f-a8f7-417d-bc13-cca686ed2bdd"). InnerVolumeSpecName "kube-api-access-fthth". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.465379 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.465460 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fthth\" (UniqueName: \"kubernetes.io/projected/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-kube-api-access-fthth\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.465492 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e6bb516f-a8f7-417d-bc13-cca686ed2bdd-util\") on node \"crc\" DevicePath \"\"" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.975875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" 
event={"ID":"e6bb516f-a8f7-417d-bc13-cca686ed2bdd","Type":"ContainerDied","Data":"9ee688fb2f51617f5077f651fbee64ab239659910e11c5932d2d1f7bcb412755"} Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.975941 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee688fb2f51617f5077f651fbee64ab239659910e11c5932d2d1f7bcb412755" Feb 23 18:44:24 crc kubenswrapper[4768]: I0223 18:44:24.975943 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.741599 4768 scope.go:117] "RemoveContainer" containerID="bf497d9c9a3d543a8d9140e961ac1d9545a1c917eddf70949c05b58190375224" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.764714 4768 scope.go:117] "RemoveContainer" containerID="f8b794e8c42af7dc1202a79b106f7faee260a15b9ac9ae7fa1cacfa01574bfb3" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.790030 4768 scope.go:117] "RemoveContainer" containerID="24e883ed5968ad2aea0d730fc1c6b926281a7fe9bcc2898e80b8cbb9b2cb5f09" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.817671 4768 scope.go:117] "RemoveContainer" containerID="932c555ee50581f84cb4eb6b4eb8b09df4615e432518841c8e19229826927f7e" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.837971 4768 scope.go:117] "RemoveContainer" containerID="6c4d3c222e24dad86c205a154f50a54b0d6feeee40f1bc7183719f3b9d189e96" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.872961 4768 scope.go:117] "RemoveContainer" containerID="8390bb8b1ff0d41fd0fa36f8722c138bff012bd57aeddaa72f7c3acca6d0d36c" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.897915 4768 scope.go:117] "RemoveContainer" containerID="1b4dfcddc1bb731f7a26715f7e971c18238340aeb4d9bafd8f96b903edba6450" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.916659 4768 scope.go:117] "RemoveContainer" 
containerID="bc25f88de9697ddd3503a48b8d2deea67abfd004fef27903e3d7ea4a20172721" Feb 23 18:44:25 crc kubenswrapper[4768]: I0223 18:44:25.935751 4768 scope.go:117] "RemoveContainer" containerID="e7ea41e0af884e20b71fdf99fd9924fc88394ae37a5e37c493a984e5477b1a53" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.772401 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-twf42"] Feb 23 18:44:29 crc kubenswrapper[4768]: E0223 18:44:29.772735 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="util" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.772758 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="util" Feb 23 18:44:29 crc kubenswrapper[4768]: E0223 18:44:29.772782 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="extract" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.772794 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="extract" Feb 23 18:44:29 crc kubenswrapper[4768]: E0223 18:44:29.772831 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="pull" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.772843 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="pull" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.773038 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6bb516f-a8f7-417d-bc13-cca686ed2bdd" containerName="extract" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.773754 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.776157 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-27g2b" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.776432 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.777841 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.792311 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-twf42"] Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.836189 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2wwj\" (UniqueName: \"kubernetes.io/projected/13c778fb-2aa4-4078-8393-45d0334de750-kube-api-access-j2wwj\") pod \"nmstate-operator-694c9596b7-twf42\" (UID: \"13c778fb-2aa4-4078-8393-45d0334de750\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.937692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2wwj\" (UniqueName: \"kubernetes.io/projected/13c778fb-2aa4-4078-8393-45d0334de750-kube-api-access-j2wwj\") pod \"nmstate-operator-694c9596b7-twf42\" (UID: \"13c778fb-2aa4-4078-8393-45d0334de750\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" Feb 23 18:44:29 crc kubenswrapper[4768]: I0223 18:44:29.972922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2wwj\" (UniqueName: \"kubernetes.io/projected/13c778fb-2aa4-4078-8393-45d0334de750-kube-api-access-j2wwj\") pod \"nmstate-operator-694c9596b7-twf42\" (UID: 
\"13c778fb-2aa4-4078-8393-45d0334de750\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" Feb 23 18:44:30 crc kubenswrapper[4768]: I0223 18:44:30.097088 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" Feb 23 18:44:30 crc kubenswrapper[4768]: I0223 18:44:30.347541 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-twf42"] Feb 23 18:44:31 crc kubenswrapper[4768]: I0223 18:44:31.022206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" event={"ID":"13c778fb-2aa4-4078-8393-45d0334de750","Type":"ContainerStarted","Data":"13961e6510f163150e2adcfa330868b1d0f1ddab071a946e7bead868b6b5e354"} Feb 23 18:44:33 crc kubenswrapper[4768]: I0223 18:44:33.045646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" event={"ID":"13c778fb-2aa4-4078-8393-45d0334de750","Type":"ContainerStarted","Data":"ad316b8bd62f2fffd3d3a1afa5d725f49476b60d59619c15fb3eb9034e31d4f7"} Feb 23 18:44:33 crc kubenswrapper[4768]: I0223 18:44:33.076098 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-twf42" podStartSLOduration=1.7886311940000001 podStartE2EDuration="4.076067973s" podCreationTimestamp="2026-02-23 18:44:29 +0000 UTC" firstStartedPulling="2026-02-23 18:44:30.356007931 +0000 UTC m=+665.746493731" lastFinishedPulling="2026-02-23 18:44:32.64344471 +0000 UTC m=+668.033930510" observedRunningTime="2026-02-23 18:44:33.070368878 +0000 UTC m=+668.460854678" watchObservedRunningTime="2026-02-23 18:44:33.076067973 +0000 UTC m=+668.466553823" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.435443 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 
18:44:38.436496 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.452610 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.453610 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2tsrz" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.453747 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.460914 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.464090 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.468506 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.471169 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckt9\" (UniqueName: \"kubernetes.io/projected/405a4831-883b-4d37-9b41-50b60a1268bf-kube-api-access-gckt9\") pod \"nmstate-metrics-58c85c668d-7sgf5\" (UID: \"405a4831-883b-4d37-9b41-50b60a1268bf\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.479705 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-sq2t7"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.482719 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572448 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckt9\" (UniqueName: \"kubernetes.io/projected/405a4831-883b-4d37-9b41-50b60a1268bf-kube-api-access-gckt9\") pod \"nmstate-metrics-58c85c668d-7sgf5\" (UID: \"405a4831-883b-4d37-9b41-50b60a1268bf\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlhr\" (UniqueName: \"kubernetes.io/projected/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-kube-api-access-7nlhr\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572523 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f792bcb6-c414-4f4a-ae75-528cbe81b29d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572558 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcbg2\" (UniqueName: \"kubernetes.io/projected/f792bcb6-c414-4f4a-ae75-528cbe81b29d-kube-api-access-hcbg2\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-ovs-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572611 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-nmstate-lock\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.572637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-dbus-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.576746 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.577794 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.580442 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.580554 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-dk28f" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.584596 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.588364 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.599807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckt9\" (UniqueName: \"kubernetes.io/projected/405a4831-883b-4d37-9b41-50b60a1268bf-kube-api-access-gckt9\") pod \"nmstate-metrics-58c85c668d-7sgf5\" (UID: \"405a4831-883b-4d37-9b41-50b60a1268bf\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-nmstate-lock\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673467 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvjcz\" (UniqueName: \"kubernetes.io/projected/37f3006a-1eda-448a-9a9a-77dd20f51534-kube-api-access-tvjcz\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673500 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/37f3006a-1eda-448a-9a9a-77dd20f51534-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673507 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-nmstate-lock\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673549 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-dbus-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlhr\" (UniqueName: \"kubernetes.io/projected/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-kube-api-access-7nlhr\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f792bcb6-c414-4f4a-ae75-528cbe81b29d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673729 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcbg2\" (UniqueName: \"kubernetes.io/projected/f792bcb6-c414-4f4a-ae75-528cbe81b29d-kube-api-access-hcbg2\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673760 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/37f3006a-1eda-448a-9a9a-77dd20f51534-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-ovs-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673912 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-dbus-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.673916 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-ovs-socket\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " 
pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.679112 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f792bcb6-c414-4f4a-ae75-528cbe81b29d-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.699508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcbg2\" (UniqueName: \"kubernetes.io/projected/f792bcb6-c414-4f4a-ae75-528cbe81b29d-kube-api-access-hcbg2\") pod \"nmstate-webhook-866bcb46dc-w8cwv\" (UID: \"f792bcb6-c414-4f4a-ae75-528cbe81b29d\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.707012 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlhr\" (UniqueName: \"kubernetes.io/projected/34f1b59b-1b5b-4093-bf9b-97d19e3118e2-kube-api-access-7nlhr\") pod \"nmstate-handler-sq2t7\" (UID: \"34f1b59b-1b5b-4093-bf9b-97d19e3118e2\") " pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.762523 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.771788 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.775362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvjcz\" (UniqueName: \"kubernetes.io/projected/37f3006a-1eda-448a-9a9a-77dd20f51534-kube-api-access-tvjcz\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.775411 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/37f3006a-1eda-448a-9a9a-77dd20f51534-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.775516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/37f3006a-1eda-448a-9a9a-77dd20f51534-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.776694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/37f3006a-1eda-448a-9a9a-77dd20f51534-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.783055 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/37f3006a-1eda-448a-9a9a-77dd20f51534-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.800424 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvjcz\" (UniqueName: \"kubernetes.io/projected/37f3006a-1eda-448a-9a9a-77dd20f51534-kube-api-access-tvjcz\") pod \"nmstate-console-plugin-5c78fc5d65-dstrl\" (UID: \"37f3006a-1eda-448a-9a9a-77dd20f51534\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.803153 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.809710 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-58dbf884bb-kbxv7"] Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.810629 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.826783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58dbf884bb-kbxv7"] Feb 23 18:44:38 crc kubenswrapper[4768]: W0223 18:44:38.865922 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34f1b59b_1b5b_4093_bf9b_97d19e3118e2.slice/crio-180e45eb3008612b47b13c524d49931830bf35d2fbc9068e35ab54f9c4fd1f76 WatchSource:0}: Error finding container 180e45eb3008612b47b13c524d49931830bf35d2fbc9068e35ab54f9c4fd1f76: Status 404 returned error can't find the container with id 180e45eb3008612b47b13c524d49931830bf35d2fbc9068e35ab54f9c4fd1f76 Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879605 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-oauth-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-trusted-ca-bundle\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879709 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " 
pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-service-ca\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879765 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rw7t\" (UniqueName: \"kubernetes.io/projected/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-kube-api-access-9rw7t\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-oauth-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.879816 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.915944 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981517 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-oauth-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981608 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981703 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-oauth-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-trusted-ca-bundle\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: 
\"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-service-ca\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.981883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rw7t\" (UniqueName: \"kubernetes.io/projected/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-kube-api-access-9rw7t\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.983932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-trusted-ca-bundle\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.986056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-service-ca\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.986773 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-oauth-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " 
pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.987317 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.987518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-oauth-config\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:38 crc kubenswrapper[4768]: I0223 18:44:38.988220 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-console-serving-cert\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.001984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rw7t\" (UniqueName: \"kubernetes.io/projected/d9318f70-9d5e-4c6a-9d33-db1cf33b707d-kube-api-access-9rw7t\") pod \"console-58dbf884bb-kbxv7\" (UID: \"d9318f70-9d5e-4c6a-9d33-db1cf33b707d\") " pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.022876 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv"] Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.090118 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5"] Feb 23 18:44:39 crc kubenswrapper[4768]: 
I0223 18:44:39.117124 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" event={"ID":"f792bcb6-c414-4f4a-ae75-528cbe81b29d","Type":"ContainerStarted","Data":"6c726bff7a8738b303d6f69e2ef179210c2f635e8930e1f0cd85ebf6328e188e"} Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.119206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sq2t7" event={"ID":"34f1b59b-1b5b-4093-bf9b-97d19e3118e2","Type":"ContainerStarted","Data":"180e45eb3008612b47b13c524d49931830bf35d2fbc9068e35ab54f9c4fd1f76"} Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.127836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" event={"ID":"405a4831-883b-4d37-9b41-50b60a1268bf","Type":"ContainerStarted","Data":"db35c6b76b57b00a89fdddfa1961284e6f9dc09f9d717274806b56f06ca8a1ca"} Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.128610 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl"] Feb 23 18:44:39 crc kubenswrapper[4768]: W0223 18:44:39.136075 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37f3006a_1eda_448a_9a9a_77dd20f51534.slice/crio-6aefd9939fad525ddfd45f226241a82514fc55e1c4f46e3d06b4a519a87853d4 WatchSource:0}: Error finding container 6aefd9939fad525ddfd45f226241a82514fc55e1c4f46e3d06b4a519a87853d4: Status 404 returned error can't find the container with id 6aefd9939fad525ddfd45f226241a82514fc55e1c4f46e3d06b4a519a87853d4 Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.182775 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:39 crc kubenswrapper[4768]: I0223 18:44:39.378740 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-58dbf884bb-kbxv7"] Feb 23 18:44:39 crc kubenswrapper[4768]: W0223 18:44:39.388729 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9318f70_9d5e_4c6a_9d33_db1cf33b707d.slice/crio-b76bcf142c87e29745ef181ef1f78ab66d82942665382e70e32b7fd52a243d37 WatchSource:0}: Error finding container b76bcf142c87e29745ef181ef1f78ab66d82942665382e70e32b7fd52a243d37: Status 404 returned error can't find the container with id b76bcf142c87e29745ef181ef1f78ab66d82942665382e70e32b7fd52a243d37 Feb 23 18:44:40 crc kubenswrapper[4768]: I0223 18:44:40.138349 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" event={"ID":"37f3006a-1eda-448a-9a9a-77dd20f51534","Type":"ContainerStarted","Data":"6aefd9939fad525ddfd45f226241a82514fc55e1c4f46e3d06b4a519a87853d4"} Feb 23 18:44:40 crc kubenswrapper[4768]: I0223 18:44:40.144103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58dbf884bb-kbxv7" event={"ID":"d9318f70-9d5e-4c6a-9d33-db1cf33b707d","Type":"ContainerStarted","Data":"06ac855dedd9f9469a1370b0055c777a2296c56fa2894b4ccae448cdfc7a9097"} Feb 23 18:44:40 crc kubenswrapper[4768]: I0223 18:44:40.144174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-58dbf884bb-kbxv7" event={"ID":"d9318f70-9d5e-4c6a-9d33-db1cf33b707d","Type":"ContainerStarted","Data":"b76bcf142c87e29745ef181ef1f78ab66d82942665382e70e32b7fd52a243d37"} Feb 23 18:44:40 crc kubenswrapper[4768]: I0223 18:44:40.176865 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-58dbf884bb-kbxv7" podStartSLOduration=2.176833088 
podStartE2EDuration="2.176833088s" podCreationTimestamp="2026-02-23 18:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:44:40.176313064 +0000 UTC m=+675.566798904" watchObservedRunningTime="2026-02-23 18:44:40.176833088 +0000 UTC m=+675.567318898" Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.170505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" event={"ID":"f792bcb6-c414-4f4a-ae75-528cbe81b29d","Type":"ContainerStarted","Data":"8bcafb96bc8d98ea3a8e2300caf4e6b7077c6e97b831f2d0f2550c28c9d92065"} Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.171164 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.173322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-sq2t7" event={"ID":"34f1b59b-1b5b-4093-bf9b-97d19e3118e2","Type":"ContainerStarted","Data":"1e3c5f190500038cf36ec80488149eec6d6421ece7d5d584dec49280086eb58d"} Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.173587 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.177164 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" event={"ID":"37f3006a-1eda-448a-9a9a-77dd20f51534","Type":"ContainerStarted","Data":"b4f1aa6d4af34be96ea77b9cf36613c2da5ba456a625060cf47d03c1ff7f0937"} Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.181196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" 
event={"ID":"405a4831-883b-4d37-9b41-50b60a1268bf","Type":"ContainerStarted","Data":"ce4e23b3956c0274a8395b5f043bf2ffe9fae4c77dcfdfed2c40c52e029cc057"} Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.201668 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" podStartSLOduration=2.158708135 podStartE2EDuration="5.201632023s" podCreationTimestamp="2026-02-23 18:44:38 +0000 UTC" firstStartedPulling="2026-02-23 18:44:39.034821941 +0000 UTC m=+674.425307741" lastFinishedPulling="2026-02-23 18:44:42.077745809 +0000 UTC m=+677.468231629" observedRunningTime="2026-02-23 18:44:43.195807435 +0000 UTC m=+678.586293225" watchObservedRunningTime="2026-02-23 18:44:43.201632023 +0000 UTC m=+678.592117863" Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.213915 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-sq2t7" podStartSLOduration=2.003918922 podStartE2EDuration="5.213885016s" podCreationTimestamp="2026-02-23 18:44:38 +0000 UTC" firstStartedPulling="2026-02-23 18:44:38.868239377 +0000 UTC m=+674.258725167" lastFinishedPulling="2026-02-23 18:44:42.078205441 +0000 UTC m=+677.468691261" observedRunningTime="2026-02-23 18:44:43.212771326 +0000 UTC m=+678.603257126" watchObservedRunningTime="2026-02-23 18:44:43.213885016 +0000 UTC m=+678.604370846" Feb 23 18:44:43 crc kubenswrapper[4768]: I0223 18:44:43.242049 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dstrl" podStartSLOduration=2.30737641 podStartE2EDuration="5.242024702s" podCreationTimestamp="2026-02-23 18:44:38 +0000 UTC" firstStartedPulling="2026-02-23 18:44:39.137951327 +0000 UTC m=+674.528437127" lastFinishedPulling="2026-02-23 18:44:42.072599609 +0000 UTC m=+677.463085419" observedRunningTime="2026-02-23 18:44:43.240762978 +0000 UTC m=+678.631248808" 
watchObservedRunningTime="2026-02-23 18:44:43.242024702 +0000 UTC m=+678.632510512" Feb 23 18:44:45 crc kubenswrapper[4768]: I0223 18:44:45.208691 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" event={"ID":"405a4831-883b-4d37-9b41-50b60a1268bf","Type":"ContainerStarted","Data":"3d8cd99651cbdb0e30e5e73358c7e8fa496b2dfd08dc13e769b26027cd86da16"} Feb 23 18:44:45 crc kubenswrapper[4768]: I0223 18:44:45.240677 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-7sgf5" podStartSLOduration=1.8078016749999999 podStartE2EDuration="7.240654162s" podCreationTimestamp="2026-02-23 18:44:38 +0000 UTC" firstStartedPulling="2026-02-23 18:44:39.087411211 +0000 UTC m=+674.477897011" lastFinishedPulling="2026-02-23 18:44:44.520263698 +0000 UTC m=+679.910749498" observedRunningTime="2026-02-23 18:44:45.232267344 +0000 UTC m=+680.622753154" watchObservedRunningTime="2026-02-23 18:44:45.240654162 +0000 UTC m=+680.631139962" Feb 23 18:44:48 crc kubenswrapper[4768]: I0223 18:44:48.840326 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-sq2t7" Feb 23 18:44:49 crc kubenswrapper[4768]: I0223 18:44:49.183311 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:49 crc kubenswrapper[4768]: I0223 18:44:49.183397 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:49 crc kubenswrapper[4768]: I0223 18:44:49.191517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:49 crc kubenswrapper[4768]: I0223 18:44:49.244939 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-58dbf884bb-kbxv7" Feb 23 18:44:49 crc 
kubenswrapper[4768]: I0223 18:44:49.328956 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v9856"] Feb 23 18:44:58 crc kubenswrapper[4768]: I0223 18:44:58.781411 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-w8cwv" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.167346 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp"] Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.168182 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.170839 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.171059 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.200401 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp"] Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.220425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vrff\" (UniqueName: \"kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.220526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.220578 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.321659 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.321750 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.321800 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vrff\" (UniqueName: \"kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: 
I0223 18:45:00.322967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.335120 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.336889 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vrff\" (UniqueName: \"kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff\") pod \"collect-profiles-29531205-w5jjp\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.494616 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:00 crc kubenswrapper[4768]: I0223 18:45:00.712170 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp"] Feb 23 18:45:01 crc kubenswrapper[4768]: I0223 18:45:01.325889 4768 generic.go:334] "Generic (PLEG): container finished" podID="b830f829-652e-448e-9a7b-ec0c1d91cee9" containerID="99c2fab5191623685bdc0925142b73d07eef1849ce06d1ac85bab4b40e542e44" exitCode=0 Feb 23 18:45:01 crc kubenswrapper[4768]: I0223 18:45:01.325979 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" event={"ID":"b830f829-652e-448e-9a7b-ec0c1d91cee9","Type":"ContainerDied","Data":"99c2fab5191623685bdc0925142b73d07eef1849ce06d1ac85bab4b40e542e44"} Feb 23 18:45:01 crc kubenswrapper[4768]: I0223 18:45:01.326035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" event={"ID":"b830f829-652e-448e-9a7b-ec0c1d91cee9","Type":"ContainerStarted","Data":"fca20f1fa519d84c91f32a5564472d6b515a395df235ac050d63789e079368eb"} Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.615993 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.665781 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume\") pod \"b830f829-652e-448e-9a7b-ec0c1d91cee9\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.665936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume\") pod \"b830f829-652e-448e-9a7b-ec0c1d91cee9\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.666031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vrff\" (UniqueName: \"kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff\") pod \"b830f829-652e-448e-9a7b-ec0c1d91cee9\" (UID: \"b830f829-652e-448e-9a7b-ec0c1d91cee9\") " Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.667513 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume" (OuterVolumeSpecName: "config-volume") pod "b830f829-652e-448e-9a7b-ec0c1d91cee9" (UID: "b830f829-652e-448e-9a7b-ec0c1d91cee9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.711463 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b830f829-652e-448e-9a7b-ec0c1d91cee9" (UID: "b830f829-652e-448e-9a7b-ec0c1d91cee9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.712453 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff" (OuterVolumeSpecName: "kube-api-access-4vrff") pod "b830f829-652e-448e-9a7b-ec0c1d91cee9" (UID: "b830f829-652e-448e-9a7b-ec0c1d91cee9"). InnerVolumeSpecName "kube-api-access-4vrff". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.767539 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b830f829-652e-448e-9a7b-ec0c1d91cee9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.767603 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vrff\" (UniqueName: \"kubernetes.io/projected/b830f829-652e-448e-9a7b-ec0c1d91cee9-kube-api-access-4vrff\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:02 crc kubenswrapper[4768]: I0223 18:45:02.767629 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b830f829-652e-448e-9a7b-ec0c1d91cee9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:03 crc kubenswrapper[4768]: I0223 18:45:03.342290 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" event={"ID":"b830f829-652e-448e-9a7b-ec0c1d91cee9","Type":"ContainerDied","Data":"fca20f1fa519d84c91f32a5564472d6b515a395df235ac050d63789e079368eb"} Feb 23 18:45:03 crc kubenswrapper[4768]: I0223 18:45:03.342666 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca20f1fa519d84c91f32a5564472d6b515a395df235ac050d63789e079368eb" Feb 23 18:45:03 crc kubenswrapper[4768]: I0223 18:45:03.342398 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.349924 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc"] Feb 23 18:45:13 crc kubenswrapper[4768]: E0223 18:45:13.350802 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b830f829-652e-448e-9a7b-ec0c1d91cee9" containerName="collect-profiles" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.350819 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b830f829-652e-448e-9a7b-ec0c1d91cee9" containerName="collect-profiles" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.350993 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b830f829-652e-448e-9a7b-ec0c1d91cee9" containerName="collect-profiles" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.351977 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.353574 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.362764 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc"] Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.446393 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nksx6\" (UniqueName: \"kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.446437 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.446466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: 
I0223 18:45:13.548310 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nksx6\" (UniqueName: \"kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.548360 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.548387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.548865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.549059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.577128 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nksx6\" (UniqueName: \"kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.672200 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:13 crc kubenswrapper[4768]: I0223 18:45:13.916501 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc"] Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.392348 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-v9856" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" containerID="cri-o://626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861" gracePeriod=15 Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.418782 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerID="4b023726ec1eeb45bbd0f76f28fce1fc7b0c61b81ce01d7fd84d4de6b541321f" exitCode=0 Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.418846 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" event={"ID":"0b6937d2-6789-4b4e-bb7c-a298b8e23168","Type":"ContainerDied","Data":"4b023726ec1eeb45bbd0f76f28fce1fc7b0c61b81ce01d7fd84d4de6b541321f"} Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.418901 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" event={"ID":"0b6937d2-6789-4b4e-bb7c-a298b8e23168","Type":"ContainerStarted","Data":"3eb890eb55132fba7579700354f07dd03c5511c9c6e45fcdf82b5606edd53333"} Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.895740 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v9856_c269d15e-90d0-47d8-b2bd-f5785fa1a69b/console/0.log" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.896472 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.984865 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.984920 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.985009 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcdc7\" (UniqueName: 
\"kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.985057 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.985088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.985132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.985227 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config\") pod \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\" (UID: \"c269d15e-90d0-47d8-b2bd-f5785fa1a69b\") " Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.986297 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config" (OuterVolumeSpecName: "console-config") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.986327 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca" (OuterVolumeSpecName: "service-ca") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.986412 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.987088 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.993775 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7" (OuterVolumeSpecName: "kube-api-access-dcdc7") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "kube-api-access-dcdc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.993915 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:45:14 crc kubenswrapper[4768]: I0223 18:45:14.994361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c269d15e-90d0-47d8-b2bd-f5785fa1a69b" (UID: "c269d15e-90d0-47d8-b2bd-f5785fa1a69b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087317 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcdc7\" (UniqueName: \"kubernetes.io/projected/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-kube-api-access-dcdc7\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087374 4768 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087404 4768 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087429 4768 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087455 4768 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087477 4768 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.087502 4768 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c269d15e-90d0-47d8-b2bd-f5785fa1a69b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.433709 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v9856_c269d15e-90d0-47d8-b2bd-f5785fa1a69b/console/0.log" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.433774 4768 generic.go:334] "Generic (PLEG): container finished" podID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerID="626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861" exitCode=2 Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.433818 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v9856" event={"ID":"c269d15e-90d0-47d8-b2bd-f5785fa1a69b","Type":"ContainerDied","Data":"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861"} Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.433851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v9856" 
event={"ID":"c269d15e-90d0-47d8-b2bd-f5785fa1a69b","Type":"ContainerDied","Data":"be51b2fcf4f2dccb28b14e4b0aa8e73f50539866b1cc0a76753f64d0fb41c086"} Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.433889 4768 scope.go:117] "RemoveContainer" containerID="626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.434126 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v9856" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.457623 4768 scope.go:117] "RemoveContainer" containerID="626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861" Feb 23 18:45:15 crc kubenswrapper[4768]: E0223 18:45:15.458170 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861\": container with ID starting with 626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861 not found: ID does not exist" containerID="626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.458207 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861"} err="failed to get container status \"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861\": rpc error: code = NotFound desc = could not find container \"626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861\": container with ID starting with 626a4af4b4b514646c35bb59aa93c9081dcc809fe60528ce1576d27da6161861 not found: ID does not exist" Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.466513 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v9856"] Feb 23 18:45:15 crc kubenswrapper[4768]: I0223 18:45:15.476081 4768 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-v9856"] Feb 23 18:45:16 crc kubenswrapper[4768]: I0223 18:45:16.442204 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerID="50d3277c1aae643416d1a8c0e2eed7317484e02220903775722bb5011407b99d" exitCode=0 Feb 23 18:45:16 crc kubenswrapper[4768]: I0223 18:45:16.442278 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" event={"ID":"0b6937d2-6789-4b4e-bb7c-a298b8e23168","Type":"ContainerDied","Data":"50d3277c1aae643416d1a8c0e2eed7317484e02220903775722bb5011407b99d"} Feb 23 18:45:17 crc kubenswrapper[4768]: I0223 18:45:17.320047 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" path="/var/lib/kubelet/pods/c269d15e-90d0-47d8-b2bd-f5785fa1a69b/volumes" Feb 23 18:45:17 crc kubenswrapper[4768]: I0223 18:45:17.457338 4768 generic.go:334] "Generic (PLEG): container finished" podID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerID="fc002759a524a4cfabe76d175be171fd3022d60b60ee8a140227ba988adbd5cf" exitCode=0 Feb 23 18:45:17 crc kubenswrapper[4768]: I0223 18:45:17.457418 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" event={"ID":"0b6937d2-6789-4b4e-bb7c-a298b8e23168","Type":"ContainerDied","Data":"fc002759a524a4cfabe76d175be171fd3022d60b60ee8a140227ba988adbd5cf"} Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.726962 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.857413 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle\") pod \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.857508 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util\") pod \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.857553 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nksx6\" (UniqueName: \"kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6\") pod \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\" (UID: \"0b6937d2-6789-4b4e-bb7c-a298b8e23168\") " Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.860022 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle" (OuterVolumeSpecName: "bundle") pod "0b6937d2-6789-4b4e-bb7c-a298b8e23168" (UID: "0b6937d2-6789-4b4e-bb7c-a298b8e23168"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.867663 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6" (OuterVolumeSpecName: "kube-api-access-nksx6") pod "0b6937d2-6789-4b4e-bb7c-a298b8e23168" (UID: "0b6937d2-6789-4b4e-bb7c-a298b8e23168"). InnerVolumeSpecName "kube-api-access-nksx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.872327 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util" (OuterVolumeSpecName: "util") pod "0b6937d2-6789-4b4e-bb7c-a298b8e23168" (UID: "0b6937d2-6789-4b4e-bb7c-a298b8e23168"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.959070 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.959129 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0b6937d2-6789-4b4e-bb7c-a298b8e23168-util\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:18 crc kubenswrapper[4768]: I0223 18:45:18.959154 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nksx6\" (UniqueName: \"kubernetes.io/projected/0b6937d2-6789-4b4e-bb7c-a298b8e23168-kube-api-access-nksx6\") on node \"crc\" DevicePath \"\"" Feb 23 18:45:19 crc kubenswrapper[4768]: I0223 18:45:19.478289 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" event={"ID":"0b6937d2-6789-4b4e-bb7c-a298b8e23168","Type":"ContainerDied","Data":"3eb890eb55132fba7579700354f07dd03c5511c9c6e45fcdf82b5606edd53333"} Feb 23 18:45:19 crc kubenswrapper[4768]: I0223 18:45:19.478345 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3eb890eb55132fba7579700354f07dd03c5511c9c6e45fcdf82b5606edd53333" Feb 23 18:45:19 crc kubenswrapper[4768]: I0223 18:45:19.478475 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc" Feb 23 18:45:25 crc kubenswrapper[4768]: I0223 18:45:25.987237 4768 scope.go:117] "RemoveContainer" containerID="14edf37676fb9add48bca8117c63728b79ac542c1691fe738ac292dddedb655c" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345376 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-655544f676-lzj52"] Feb 23 18:45:27 crc kubenswrapper[4768]: E0223 18:45:27.345648 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345667 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" Feb 23 18:45:27 crc kubenswrapper[4768]: E0223 18:45:27.345680 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="util" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345687 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="util" Feb 23 18:45:27 crc kubenswrapper[4768]: E0223 18:45:27.345703 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="extract" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345710 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="extract" Feb 23 18:45:27 crc kubenswrapper[4768]: E0223 18:45:27.345730 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="pull" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345737 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="pull" 
Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345839 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c269d15e-90d0-47d8-b2bd-f5785fa1a69b" containerName="console" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.345849 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6937d2-6789-4b4e-bb7c-a298b8e23168" containerName="extract" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.346219 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.348293 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.348507 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.353463 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.353834 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.354027 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-db28q" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.358880 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-655544f676-lzj52"] Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.474560 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-webhook-cert\") pod 
\"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.474822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-apiservice-cert\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.474955 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwz5\" (UniqueName: \"kubernetes.io/projected/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-kube-api-access-rhwz5\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.576225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-apiservice-cert\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.576294 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwz5\" (UniqueName: \"kubernetes.io/projected/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-kube-api-access-rhwz5\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " 
pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.576337 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-webhook-cert\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.585121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-apiservice-cert\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.600079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-webhook-cert\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.605344 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwz5\" (UniqueName: \"kubernetes.io/projected/e250524b-d6cd-444e-9e6b-3a2a5387d3b2-kube-api-access-rhwz5\") pod \"metallb-operator-controller-manager-655544f676-lzj52\" (UID: \"e250524b-d6cd-444e-9e6b-3a2a5387d3b2\") " pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.661621 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.704116 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j"] Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.704846 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.707263 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.707483 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-v48dj" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.707647 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.726493 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j"] Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.778902 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-apiservice-cert\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.779160 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-webhook-cert\") pod 
\"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.779240 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf5vk\" (UniqueName: \"kubernetes.io/projected/8c32327a-6231-46a7-9d4b-e0ef86979632-kube-api-access-xf5vk\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.881114 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-apiservice-cert\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.881223 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-webhook-cert\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.881296 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf5vk\" (UniqueName: \"kubernetes.io/projected/8c32327a-6231-46a7-9d4b-e0ef86979632-kube-api-access-xf5vk\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 
18:45:27.884867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-webhook-cert\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.885593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c32327a-6231-46a7-9d4b-e0ef86979632-apiservice-cert\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:27 crc kubenswrapper[4768]: I0223 18:45:27.901416 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf5vk\" (UniqueName: \"kubernetes.io/projected/8c32327a-6231-46a7-9d4b-e0ef86979632-kube-api-access-xf5vk\") pod \"metallb-operator-webhook-server-5cf8d9bdbb-l2w9j\" (UID: \"8c32327a-6231-46a7-9d4b-e0ef86979632\") " pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:28 crc kubenswrapper[4768]: I0223 18:45:28.024466 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:28 crc kubenswrapper[4768]: I0223 18:45:28.150715 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-655544f676-lzj52"] Feb 23 18:45:28 crc kubenswrapper[4768]: W0223 18:45:28.182732 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode250524b_d6cd_444e_9e6b_3a2a5387d3b2.slice/crio-52a3a7f34a36d807fd8c30ca1799d3d9bfaeed7644c54b585fd143af481e68cd WatchSource:0}: Error finding container 52a3a7f34a36d807fd8c30ca1799d3d9bfaeed7644c54b585fd143af481e68cd: Status 404 returned error can't find the container with id 52a3a7f34a36d807fd8c30ca1799d3d9bfaeed7644c54b585fd143af481e68cd Feb 23 18:45:28 crc kubenswrapper[4768]: I0223 18:45:28.297226 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j"] Feb 23 18:45:28 crc kubenswrapper[4768]: I0223 18:45:28.534553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" event={"ID":"8c32327a-6231-46a7-9d4b-e0ef86979632","Type":"ContainerStarted","Data":"bc14f045c4e8f8134141109aacf9a06efd73710bba67bf44de0fa44aa3a7f990"} Feb 23 18:45:28 crc kubenswrapper[4768]: I0223 18:45:28.538051 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" event={"ID":"e250524b-d6cd-444e-9e6b-3a2a5387d3b2","Type":"ContainerStarted","Data":"52a3a7f34a36d807fd8c30ca1799d3d9bfaeed7644c54b585fd143af481e68cd"} Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.586802 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" 
event={"ID":"e250524b-d6cd-444e-9e6b-3a2a5387d3b2","Type":"ContainerStarted","Data":"ddcce30c2088b772aeb540716ce15b700988cc0c763eb9ad58b04811a7992909"} Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.587842 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.590729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" event={"ID":"8c32327a-6231-46a7-9d4b-e0ef86979632","Type":"ContainerStarted","Data":"f9bc3a89f9884660efd7fcd4aca7f6f1a54e0a902bf451cbc4c636ee9e2edc1e"} Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.591414 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.614893 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" podStartSLOduration=1.8453761210000001 podStartE2EDuration="6.614860649s" podCreationTimestamp="2026-02-23 18:45:27 +0000 UTC" firstStartedPulling="2026-02-23 18:45:28.185732975 +0000 UTC m=+723.576218775" lastFinishedPulling="2026-02-23 18:45:32.955217503 +0000 UTC m=+728.345703303" observedRunningTime="2026-02-23 18:45:33.611881537 +0000 UTC m=+729.002367347" watchObservedRunningTime="2026-02-23 18:45:33.614860649 +0000 UTC m=+729.005346489" Feb 23 18:45:33 crc kubenswrapper[4768]: I0223 18:45:33.647332 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" podStartSLOduration=1.988560672 podStartE2EDuration="6.647301559s" podCreationTimestamp="2026-02-23 18:45:27 +0000 UTC" firstStartedPulling="2026-02-23 18:45:28.310633525 +0000 UTC m=+723.701119335" lastFinishedPulling="2026-02-23 
18:45:32.969374422 +0000 UTC m=+728.359860222" observedRunningTime="2026-02-23 18:45:33.641598403 +0000 UTC m=+729.032084203" watchObservedRunningTime="2026-02-23 18:45:33.647301559 +0000 UTC m=+729.037787399" Feb 23 18:45:48 crc kubenswrapper[4768]: I0223 18:45:48.029906 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5cf8d9bdbb-l2w9j" Feb 23 18:46:07 crc kubenswrapper[4768]: I0223 18:46:07.674967 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-655544f676-lzj52" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.515562 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lhjzv"] Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.517715 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.520265 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.520543 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.520538 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-x4kcm" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.544297 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"] Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.545222 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.550130 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.561098 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"] Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.598966 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-sockets\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.599064 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshnw\" (UniqueName: \"kubernetes.io/projected/2c2223f2-8fac-4021-b096-4087bac80ab0-kube-api-access-qshnw\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.599117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics-certs\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.599157 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-reloader\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:08 crc 
kubenswrapper[4768]: I0223 18:46:08.599182 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-startup\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.599293 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-conf\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.599335 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.622759 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-knv9f"]
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.623673 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.628442 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.628507 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-jjsl7"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.628627 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.628925 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.638146 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-8snqf"]
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.639276 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.644957 4768 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.666868 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8snqf"]
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700458 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700511 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwlxw\" (UniqueName: \"kubernetes.io/projected/06a269f1-e448-49da-b22d-7ef6bcfe31e1-kube-api-access-lwlxw\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700541 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-sockets\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700749 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshnw\" (UniqueName: \"kubernetes.io/projected/2c2223f2-8fac-4021-b096-4087bac80ab0-kube-api-access-qshnw\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700828 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics-certs\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-reloader\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tq4b\" (UniqueName: \"kubernetes.io/projected/bc147539-1205-4a1f-82d6-ca40f47d37d0-kube-api-access-8tq4b\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700941 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-startup\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.700885 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-sockets\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701103 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-conf\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701168 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a269f1-e448-49da-b22d-7ef6bcfe31e1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701197 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701309 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc147539-1205-4a1f-82d6-ca40f47d37d0-metallb-excludel2\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701323 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-reloader\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-conf\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.701541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.702078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c2223f2-8fac-4021-b096-4087bac80ab0-frr-startup\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.706861 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c2223f2-8fac-4021-b096-4087bac80ab0-metrics-certs\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.722858 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshnw\" (UniqueName: \"kubernetes.io/projected/2c2223f2-8fac-4021-b096-4087bac80ab0-kube-api-access-qshnw\") pod \"frr-k8s-lhjzv\" (UID: \"2c2223f2-8fac-4021-b096-4087bac80ab0\") " pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.802749 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.802809 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwlxw\" (UniqueName: \"kubernetes.io/projected/06a269f1-e448-49da-b22d-7ef6bcfe31e1-kube-api-access-lwlxw\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.802895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz7pq\" (UniqueName: \"kubernetes.io/projected/a02480fd-a2d6-4364-b83f-e01dfa5a6676-kube-api-access-nz7pq\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.802930 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tq4b\" (UniqueName: \"kubernetes.io/projected/bc147539-1205-4a1f-82d6-ca40f47d37d0-kube-api-access-8tq4b\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: E0223 18:46:08.802963 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.802982 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-cert\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: E0223 18:46:08.803075 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist podName:bc147539-1205-4a1f-82d6-ca40f47d37d0 nodeName:}" failed. No retries permitted until 2026-02-23 18:46:09.303045244 +0000 UTC m=+764.693531134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist") pod "speaker-knv9f" (UID: "bc147539-1205-4a1f-82d6-ca40f47d37d0") : secret "metallb-memberlist" not found
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.803168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a269f1-e448-49da-b22d-7ef6bcfe31e1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.803211 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.803237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-metrics-certs\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.803320 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc147539-1205-4a1f-82d6-ca40f47d37d0-metallb-excludel2\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: E0223 18:46:08.803455 4768 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Feb 23 18:46:08 crc kubenswrapper[4768]: E0223 18:46:08.803532 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs podName:bc147539-1205-4a1f-82d6-ca40f47d37d0 nodeName:}" failed. No retries permitted until 2026-02-23 18:46:09.303505436 +0000 UTC m=+764.693991306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs") pod "speaker-knv9f" (UID: "bc147539-1205-4a1f-82d6-ca40f47d37d0") : secret "speaker-certs-secret" not found
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.804115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc147539-1205-4a1f-82d6-ca40f47d37d0-metallb-excludel2\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.811059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/06a269f1-e448-49da-b22d-7ef6bcfe31e1-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.833665 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwlxw\" (UniqueName: \"kubernetes.io/projected/06a269f1-e448-49da-b22d-7ef6bcfe31e1-kube-api-access-lwlxw\") pod \"frr-k8s-webhook-server-78b44bf5bb-sglwm\" (UID: \"06a269f1-e448-49da-b22d-7ef6bcfe31e1\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.838086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tq4b\" (UniqueName: \"kubernetes.io/projected/bc147539-1205-4a1f-82d6-ca40f47d37d0-kube-api-access-8tq4b\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.839952 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.860815 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.904682 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz7pq\" (UniqueName: \"kubernetes.io/projected/a02480fd-a2d6-4364-b83f-e01dfa5a6676-kube-api-access-nz7pq\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.904748 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-cert\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.904793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-metrics-certs\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.910742 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-cert\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.914970 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a02480fd-a2d6-4364-b83f-e01dfa5a6676-metrics-certs\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.922899 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz7pq\" (UniqueName: \"kubernetes.io/projected/a02480fd-a2d6-4364-b83f-e01dfa5a6676-kube-api-access-nz7pq\") pod \"controller-69bbfbf88f-8snqf\" (UID: \"a02480fd-a2d6-4364-b83f-e01dfa5a6676\") " pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:08 crc kubenswrapper[4768]: I0223 18:46:08.954386 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.170659 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-8snqf"]
Feb 23 18:46:09 crc kubenswrapper[4768]: W0223 18:46:09.175023 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda02480fd_a2d6_4364_b83f_e01dfa5a6676.slice/crio-d05b360a9a78f89a1a3f4cf096309982ddad80a34794be7797acbaa5de331fc9 WatchSource:0}: Error finding container d05b360a9a78f89a1a3f4cf096309982ddad80a34794be7797acbaa5de331fc9: Status 404 returned error can't find the container with id d05b360a9a78f89a1a3f4cf096309982ddad80a34794be7797acbaa5de331fc9
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.299955 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"]
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.309776 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.309905 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:09 crc kubenswrapper[4768]: E0223 18:46:09.309983 4768 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 23 18:46:09 crc kubenswrapper[4768]: E0223 18:46:09.310066 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist podName:bc147539-1205-4a1f-82d6-ca40f47d37d0 nodeName:}" failed. No retries permitted until 2026-02-23 18:46:10.310046387 +0000 UTC m=+765.700532207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist") pod "speaker-knv9f" (UID: "bc147539-1205-4a1f-82d6-ca40f47d37d0") : secret "metallb-memberlist" not found
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.314715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-metrics-certs\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:09 crc kubenswrapper[4768]: W0223 18:46:09.318887 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06a269f1_e448_49da_b22d_7ef6bcfe31e1.slice/crio-8a58625e35a3ed1a9e0a23cb4956faf7e7dbfcc908e73dc009806dc4d9d7a9fe WatchSource:0}: Error finding container 8a58625e35a3ed1a9e0a23cb4956faf7e7dbfcc908e73dc009806dc4d9d7a9fe: Status 404 returned error can't find the container with id 8a58625e35a3ed1a9e0a23cb4956faf7e7dbfcc908e73dc009806dc4d9d7a9fe
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.544981 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.545055 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.853331 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"3b8394e99a96d3e61a430442b8a1e73d06d37c3e81285d6a88713dfbf3d4c765"}
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.854455 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm" event={"ID":"06a269f1-e448-49da-b22d-7ef6bcfe31e1","Type":"ContainerStarted","Data":"8a58625e35a3ed1a9e0a23cb4956faf7e7dbfcc908e73dc009806dc4d9d7a9fe"}
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.856143 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8snqf" event={"ID":"a02480fd-a2d6-4364-b83f-e01dfa5a6676","Type":"ContainerStarted","Data":"0df4e3d68cf0db690eb6b34e43b5f4729fea537ae9fa41c9cf859780bd37b9f7"}
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.856185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8snqf" event={"ID":"a02480fd-a2d6-4364-b83f-e01dfa5a6676","Type":"ContainerStarted","Data":"70cc0b25095ba95950b86c501a750a004581db20d2dd25eb1e9c3ff4347ee84b"}
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.856197 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-8snqf" event={"ID":"a02480fd-a2d6-4364-b83f-e01dfa5a6676","Type":"ContainerStarted","Data":"d05b360a9a78f89a1a3f4cf096309982ddad80a34794be7797acbaa5de331fc9"}
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.856296 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-8snqf"
Feb 23 18:46:09 crc kubenswrapper[4768]: I0223 18:46:09.877957 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-8snqf" podStartSLOduration=1.8779353909999998 podStartE2EDuration="1.877935391s" podCreationTimestamp="2026-02-23 18:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:46:09.873628584 +0000 UTC m=+765.264114384" watchObservedRunningTime="2026-02-23 18:46:09.877935391 +0000 UTC m=+765.268421191"
Feb 23 18:46:10 crc kubenswrapper[4768]: I0223 18:46:10.326614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:10 crc kubenswrapper[4768]: I0223 18:46:10.332436 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc147539-1205-4a1f-82d6-ca40f47d37d0-memberlist\") pod \"speaker-knv9f\" (UID: \"bc147539-1205-4a1f-82d6-ca40f47d37d0\") " pod="metallb-system/speaker-knv9f"
Feb 23 18:46:10 crc kubenswrapper[4768]: I0223 18:46:10.564175 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-knv9f"
Feb 23 18:46:10 crc kubenswrapper[4768]: W0223 18:46:10.600416 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc147539_1205_4a1f_82d6_ca40f47d37d0.slice/crio-54685ac90c83d04963069ea24e72370740deeea50296713a7de273a4c967c025 WatchSource:0}: Error finding container 54685ac90c83d04963069ea24e72370740deeea50296713a7de273a4c967c025: Status 404 returned error can't find the container with id 54685ac90c83d04963069ea24e72370740deeea50296713a7de273a4c967c025
Feb 23 18:46:10 crc kubenswrapper[4768]: I0223 18:46:10.867549 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-knv9f" event={"ID":"bc147539-1205-4a1f-82d6-ca40f47d37d0","Type":"ContainerStarted","Data":"54685ac90c83d04963069ea24e72370740deeea50296713a7de273a4c967c025"}
Feb 23 18:46:11 crc kubenswrapper[4768]: I0223 18:46:11.884339 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-knv9f" event={"ID":"bc147539-1205-4a1f-82d6-ca40f47d37d0","Type":"ContainerStarted","Data":"bf773d9fac1ea9d330bc054372108dd12c3b2bc1b8c3199ae38e673d40caf740"}
Feb 23 18:46:11 crc kubenswrapper[4768]: I0223 18:46:11.884390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-knv9f" event={"ID":"bc147539-1205-4a1f-82d6-ca40f47d37d0","Type":"ContainerStarted","Data":"eff7d837556508a1d06fffc7fa2cbea9112f86a3107b41c500ce6df7798290af"}
Feb 23 18:46:11 crc kubenswrapper[4768]: I0223 18:46:11.884492 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-knv9f"
Feb 23 18:46:11 crc kubenswrapper[4768]: I0223 18:46:11.904579 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-knv9f" podStartSLOduration=3.904562097 podStartE2EDuration="3.904562097s" podCreationTimestamp="2026-02-23 18:46:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:46:11.900980219 +0000 UTC m=+767.291466019" watchObservedRunningTime="2026-02-23 18:46:11.904562097 +0000 UTC m=+767.295047897"
Feb 23 18:46:16 crc kubenswrapper[4768]: I0223 18:46:16.926472 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm" event={"ID":"06a269f1-e448-49da-b22d-7ef6bcfe31e1","Type":"ContainerStarted","Data":"7721637bb4c89cdab70442e5a5a3ac6e18898055d5caedb645aad74ffdb102c6"}
Feb 23 18:46:16 crc kubenswrapper[4768]: I0223 18:46:16.927167 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm"
Feb 23 18:46:16 crc kubenswrapper[4768]: I0223 18:46:16.928626 4768 generic.go:334] "Generic (PLEG): container finished" podID="2c2223f2-8fac-4021-b096-4087bac80ab0" containerID="f6cad061dec6ef5e649d4cbccbcfab782a89e15ab7d6c20999f34d4e5d724e80" exitCode=0
Feb 23 18:46:16 crc kubenswrapper[4768]: I0223 18:46:16.928883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerDied","Data":"f6cad061dec6ef5e649d4cbccbcfab782a89e15ab7d6c20999f34d4e5d724e80"}
Feb 23 18:46:16 crc kubenswrapper[4768]: I0223 18:46:16.953328 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm" podStartSLOduration=2.075866978 podStartE2EDuration="8.953235923s" podCreationTimestamp="2026-02-23 18:46:08 +0000 UTC" firstStartedPulling="2026-02-23 18:46:09.321187733 +0000 UTC m=+764.711673543" lastFinishedPulling="2026-02-23 18:46:16.198556688 +0000 UTC m=+771.589042488" observedRunningTime="2026-02-23 18:46:16.950771685 +0000 UTC m=+772.341257525" watchObservedRunningTime="2026-02-23 18:46:16.953235923 +0000 UTC m=+772.343721733"
Feb 23 18:46:17 crc kubenswrapper[4768]: I0223 18:46:17.936671 4768 generic.go:334] "Generic (PLEG): container finished" podID="2c2223f2-8fac-4021-b096-4087bac80ab0" containerID="cced70df86101b068188b1f46ac83738f63d7247ffe0f2b22aa60484122420a6" exitCode=0
Feb 23 18:46:17 crc kubenswrapper[4768]: I0223 18:46:17.936713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerDied","Data":"cced70df86101b068188b1f46ac83738f63d7247ffe0f2b22aa60484122420a6"}
Feb 23 18:46:18 crc kubenswrapper[4768]: I0223 18:46:18.949142 4768 generic.go:334] "Generic (PLEG): container finished" podID="2c2223f2-8fac-4021-b096-4087bac80ab0" containerID="997c9ad505ebc05a12416766435f877af0d737e1b478ff88eb38614e81bdecd0" exitCode=0
Feb 23 18:46:18 crc kubenswrapper[4768]: I0223 18:46:18.949239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerDied","Data":"997c9ad505ebc05a12416766435f877af0d737e1b478ff88eb38614e81bdecd0"}
Feb 23 18:46:19 crc kubenswrapper[4768]: I0223 18:46:19.970183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"da9e6dfacde342c24726688dbc9791940cdb14a403e4700776abc04d87a77600"}
Feb 23 18:46:19 crc kubenswrapper[4768]: I0223 18:46:19.970653 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"056f45ae2b934ceb0fff9f83f200c48b134ebf8c7d74f841789bb662505c41b1"}
Feb 23 18:46:19 crc kubenswrapper[4768]: I0223 18:46:19.970671 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"57f6dd53d63642dd24154e364d0b6f47140c1a68c3217d4b82aca14a98aae9a6"}
Feb 23 18:46:19 crc kubenswrapper[4768]: I0223 18:46:19.970688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"2a480d949738f290e9f600dd5e63ca90cd6334b16cc23a682147256e8460ce8a"}
Feb 23 18:46:19 crc kubenswrapper[4768]: I0223 18:46:19.970700 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"3917f51ef8bc3b8c422bbb05414b9dfd8e4cc05982e4c1cb386f323fa164f37c"}
Feb 23 18:46:20 crc kubenswrapper[4768]: I0223 18:46:20.569032 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-knv9f"
Feb 23 18:46:20 crc kubenswrapper[4768]: I0223 18:46:20.983440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lhjzv" event={"ID":"2c2223f2-8fac-4021-b096-4087bac80ab0","Type":"ContainerStarted","Data":"3ee06cc2cba8f57255ca93e228067c4eda4100dbb97804b5e270224c20d62065"}
Feb 23 18:46:20 crc kubenswrapper[4768]: I0223 18:46:20.984677 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:21 crc kubenswrapper[4768]: I0223 18:46:21.014698 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lhjzv" podStartSLOduration=5.8055194 podStartE2EDuration="13.014669547s" podCreationTimestamp="2026-02-23 18:46:08 +0000 UTC" firstStartedPulling="2026-02-23 18:46:08.980704882 +0000 UTC m=+764.371190672" lastFinishedPulling="2026-02-23 18:46:16.189855019 +0000 UTC m=+771.580340819" observedRunningTime="2026-02-23 18:46:21.014387489 +0000 UTC m=+776.404873309" watchObservedRunningTime="2026-02-23 18:46:21.014669547 +0000 UTC m=+776.405155357"
Feb 23 18:46:23 crc kubenswrapper[4768]: I0223 18:46:23.841710 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:23 crc kubenswrapper[4768]: I0223 18:46:23.890232 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lhjzv"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.709985 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"]
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.711097 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2cmlj"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.712403 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.712957 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-d6pxp"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.717573 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"]
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.720059 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.843985 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qh5d\" (UniqueName: \"kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d\") pod \"openstack-operator-index-2cmlj\" (UID: \"86b701d0-e1c7-4acd-a3f5-6aeed522c09a\") " pod="openstack-operators/openstack-operator-index-2cmlj"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.946318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qh5d\" (UniqueName: \"kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d\") pod \"openstack-operator-index-2cmlj\" (UID: \"86b701d0-e1c7-4acd-a3f5-6aeed522c09a\") " pod="openstack-operators/openstack-operator-index-2cmlj"
Feb 23 18:46:26 crc kubenswrapper[4768]: I0223 18:46:26.976221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qh5d\" (UniqueName: \"kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d\") pod \"openstack-operator-index-2cmlj\" (UID: \"86b701d0-e1c7-4acd-a3f5-6aeed522c09a\") " pod="openstack-operators/openstack-operator-index-2cmlj"
Feb 23 18:46:27 crc kubenswrapper[4768]: I0223 18:46:27.026494 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2cmlj"
Feb 23 18:46:27 crc kubenswrapper[4768]: I0223 18:46:27.303869 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"]
Feb 23 18:46:27 crc kubenswrapper[4768]: W0223 18:46:27.318295 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86b701d0_e1c7_4acd_a3f5_6aeed522c09a.slice/crio-93d7cd9458ec1eb72cd6c06e13884bb1b2b6c9b289c6bfca0348f0f01b54df96 WatchSource:0}: Error finding container 93d7cd9458ec1eb72cd6c06e13884bb1b2b6c9b289c6bfca0348f0f01b54df96: Status 404 returned error can't find the container with id 93d7cd9458ec1eb72cd6c06e13884bb1b2b6c9b289c6bfca0348f0f01b54df96
Feb 23 18:46:28 crc kubenswrapper[4768]: I0223 18:46:28.053138 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2cmlj" event={"ID":"86b701d0-e1c7-4acd-a3f5-6aeed522c09a","Type":"ContainerStarted","Data":"93d7cd9458ec1eb72cd6c06e13884bb1b2b6c9b289c6bfca0348f0f01b54df96"}
Feb 23 18:46:28 crc
kubenswrapper[4768]: I0223 18:46:28.868878 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-sglwm" Feb 23 18:46:28 crc kubenswrapper[4768]: I0223 18:46:28.960640 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-8snqf" Feb 23 18:46:31 crc kubenswrapper[4768]: I0223 18:46:31.900286 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"] Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.509069 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cwqr7"] Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.509780 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.524926 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cwqr7"] Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.638987 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6jr\" (UniqueName: \"kubernetes.io/projected/95798783-c266-4139-a43a-b4fbf879c1b8-kube-api-access-ls6jr\") pod \"openstack-operator-index-cwqr7\" (UID: \"95798783-c266-4139-a43a-b4fbf879c1b8\") " pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.740497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6jr\" (UniqueName: \"kubernetes.io/projected/95798783-c266-4139-a43a-b4fbf879c1b8-kube-api-access-ls6jr\") pod \"openstack-operator-index-cwqr7\" (UID: \"95798783-c266-4139-a43a-b4fbf879c1b8\") " pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 
18:46:32.766672 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls6jr\" (UniqueName: \"kubernetes.io/projected/95798783-c266-4139-a43a-b4fbf879c1b8-kube-api-access-ls6jr\") pod \"openstack-operator-index-cwqr7\" (UID: \"95798783-c266-4139-a43a-b4fbf879c1b8\") " pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.838329 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:32 crc kubenswrapper[4768]: I0223 18:46:32.938200 4768 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.084342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2cmlj" event={"ID":"86b701d0-e1c7-4acd-a3f5-6aeed522c09a","Type":"ContainerStarted","Data":"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3"} Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.084473 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2cmlj" podUID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" containerName="registry-server" containerID="cri-o://3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3" gracePeriod=2 Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.112780 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2cmlj" podStartSLOduration=2.234007146 podStartE2EDuration="7.112762802s" podCreationTimestamp="2026-02-23 18:46:26 +0000 UTC" firstStartedPulling="2026-02-23 18:46:27.320534793 +0000 UTC m=+782.711020603" lastFinishedPulling="2026-02-23 18:46:32.199290449 +0000 UTC m=+787.589776259" observedRunningTime="2026-02-23 18:46:33.108096004 +0000 UTC 
m=+788.498581814" watchObservedRunningTime="2026-02-23 18:46:33.112762802 +0000 UTC m=+788.503248602" Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.407891 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cwqr7"] Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.503299 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2cmlj" Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.652783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qh5d\" (UniqueName: \"kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d\") pod \"86b701d0-e1c7-4acd-a3f5-6aeed522c09a\" (UID: \"86b701d0-e1c7-4acd-a3f5-6aeed522c09a\") " Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.674670 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d" (OuterVolumeSpecName: "kube-api-access-4qh5d") pod "86b701d0-e1c7-4acd-a3f5-6aeed522c09a" (UID: "86b701d0-e1c7-4acd-a3f5-6aeed522c09a"). InnerVolumeSpecName "kube-api-access-4qh5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:46:33 crc kubenswrapper[4768]: I0223 18:46:33.755392 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qh5d\" (UniqueName: \"kubernetes.io/projected/86b701d0-e1c7-4acd-a3f5-6aeed522c09a-kube-api-access-4qh5d\") on node \"crc\" DevicePath \"\"" Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.096707 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cwqr7" event={"ID":"95798783-c266-4139-a43a-b4fbf879c1b8","Type":"ContainerStarted","Data":"aa9df485c56927124d3e7272dee0a3ebc16f20e79158bcbc527e38826059b149"} Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.099952 4768 generic.go:334] "Generic (PLEG): container finished" podID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" containerID="3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3" exitCode=0 Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.100002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2cmlj" event={"ID":"86b701d0-e1c7-4acd-a3f5-6aeed522c09a","Type":"ContainerDied","Data":"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3"} Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.100029 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2cmlj" event={"ID":"86b701d0-e1c7-4acd-a3f5-6aeed522c09a","Type":"ContainerDied","Data":"93d7cd9458ec1eb72cd6c06e13884bb1b2b6c9b289c6bfca0348f0f01b54df96"} Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.100059 4768 scope.go:117] "RemoveContainer" containerID="3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3" Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.100292 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2cmlj" Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.119499 4768 scope.go:117] "RemoveContainer" containerID="3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3" Feb 23 18:46:34 crc kubenswrapper[4768]: E0223 18:46:34.120043 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3\": container with ID starting with 3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3 not found: ID does not exist" containerID="3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3" Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.120101 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3"} err="failed to get container status \"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3\": rpc error: code = NotFound desc = could not find container \"3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3\": container with ID starting with 3ae874a61aea7473465e5bef9b03bed1112573cddee588d2df67c694bb56dab3 not found: ID does not exist" Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.128418 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"] Feb 23 18:46:34 crc kubenswrapper[4768]: I0223 18:46:34.136946 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2cmlj"] Feb 23 18:46:35 crc kubenswrapper[4768]: I0223 18:46:35.111715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cwqr7" event={"ID":"95798783-c266-4139-a43a-b4fbf879c1b8","Type":"ContainerStarted","Data":"75f4279a82306cc148c5120e427c8a7f4473b233dea5eb7966c647bfa2e8c425"} 
Feb 23 18:46:35 crc kubenswrapper[4768]: I0223 18:46:35.144626 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cwqr7" podStartSLOduration=2.689497723 podStartE2EDuration="3.144602935s" podCreationTimestamp="2026-02-23 18:46:32 +0000 UTC" firstStartedPulling="2026-02-23 18:46:33.426431424 +0000 UTC m=+788.816917214" lastFinishedPulling="2026-02-23 18:46:33.881536616 +0000 UTC m=+789.272022426" observedRunningTime="2026-02-23 18:46:35.137054256 +0000 UTC m=+790.527540106" watchObservedRunningTime="2026-02-23 18:46:35.144602935 +0000 UTC m=+790.535088775" Feb 23 18:46:35 crc kubenswrapper[4768]: I0223 18:46:35.317118 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" path="/var/lib/kubelet/pods/86b701d0-e1c7-4acd-a3f5-6aeed522c09a/volumes" Feb 23 18:46:38 crc kubenswrapper[4768]: I0223 18:46:38.844081 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lhjzv" Feb 23 18:46:39 crc kubenswrapper[4768]: I0223 18:46:39.545565 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:46:39 crc kubenswrapper[4768]: I0223 18:46:39.545677 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:46:42 crc kubenswrapper[4768]: I0223 18:46:42.838744 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 
18:46:42 crc kubenswrapper[4768]: I0223 18:46:42.839163 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:42 crc kubenswrapper[4768]: I0223 18:46:42.880307 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:43 crc kubenswrapper[4768]: I0223 18:46:43.217971 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cwqr7" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.194037 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c"] Feb 23 18:46:45 crc kubenswrapper[4768]: E0223 18:46:45.194565 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" containerName="registry-server" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.194579 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" containerName="registry-server" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.194701 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="86b701d0-e1c7-4acd-a3f5-6aeed522c09a" containerName="registry-server" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.195452 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.197444 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-r8dbz" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.203733 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c"] Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.325696 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdldh\" (UniqueName: \"kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.325855 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.325912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 
18:46:45.427552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.427611 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdldh\" (UniqueName: \"kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.427715 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.428478 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.428516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.449968 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdldh\" (UniqueName: \"kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh\") pod \"c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.511518 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:45 crc kubenswrapper[4768]: I0223 18:46:45.963007 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c"] Feb 23 18:46:45 crc kubenswrapper[4768]: W0223 18:46:45.969761 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1931c996_5088_425f_9e39_ef898c8742d8.slice/crio-9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965 WatchSource:0}: Error finding container 9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965: Status 404 returned error can't find the container with id 9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965 Feb 23 18:46:46 crc kubenswrapper[4768]: I0223 18:46:46.189050 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" 
event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerStarted","Data":"1c07ef0c197dcc9615b14ddee8d9af71343066a4e2c691ae181ad8d01f452acb"} Feb 23 18:46:46 crc kubenswrapper[4768]: I0223 18:46:46.189471 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerStarted","Data":"9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965"} Feb 23 18:46:47 crc kubenswrapper[4768]: I0223 18:46:47.198529 4768 generic.go:334] "Generic (PLEG): container finished" podID="1931c996-5088-425f-9e39-ef898c8742d8" containerID="1c07ef0c197dcc9615b14ddee8d9af71343066a4e2c691ae181ad8d01f452acb" exitCode=0 Feb 23 18:46:47 crc kubenswrapper[4768]: I0223 18:46:47.198601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerDied","Data":"1c07ef0c197dcc9615b14ddee8d9af71343066a4e2c691ae181ad8d01f452acb"} Feb 23 18:46:48 crc kubenswrapper[4768]: I0223 18:46:48.209443 4768 generic.go:334] "Generic (PLEG): container finished" podID="1931c996-5088-425f-9e39-ef898c8742d8" containerID="f0d9251c262fc1e195c127a78f87fb18421649699e50897de1a7bda3c32aef9c" exitCode=0 Feb 23 18:46:48 crc kubenswrapper[4768]: I0223 18:46:48.209575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerDied","Data":"f0d9251c262fc1e195c127a78f87fb18421649699e50897de1a7bda3c32aef9c"} Feb 23 18:46:49 crc kubenswrapper[4768]: I0223 18:46:49.218359 4768 generic.go:334] "Generic (PLEG): container finished" podID="1931c996-5088-425f-9e39-ef898c8742d8" containerID="ce51edb5783cf8576ea9fae61d57dec466759fac55565be06be046db6c83bb3e" exitCode=0 Feb 
23 18:46:49 crc kubenswrapper[4768]: I0223 18:46:49.218589 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerDied","Data":"ce51edb5783cf8576ea9fae61d57dec466759fac55565be06be046db6c83bb3e"} Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.464341 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.598625 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util\") pod \"1931c996-5088-425f-9e39-ef898c8742d8\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.598727 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdldh\" (UniqueName: \"kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh\") pod \"1931c996-5088-425f-9e39-ef898c8742d8\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.598775 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle\") pod \"1931c996-5088-425f-9e39-ef898c8742d8\" (UID: \"1931c996-5088-425f-9e39-ef898c8742d8\") " Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.599528 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle" (OuterVolumeSpecName: "bundle") pod "1931c996-5088-425f-9e39-ef898c8742d8" (UID: "1931c996-5088-425f-9e39-ef898c8742d8"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.606432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh" (OuterVolumeSpecName: "kube-api-access-zdldh") pod "1931c996-5088-425f-9e39-ef898c8742d8" (UID: "1931c996-5088-425f-9e39-ef898c8742d8"). InnerVolumeSpecName "kube-api-access-zdldh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.617396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util" (OuterVolumeSpecName: "util") pod "1931c996-5088-425f-9e39-ef898c8742d8" (UID: "1931c996-5088-425f-9e39-ef898c8742d8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.699964 4768 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-util\") on node \"crc\" DevicePath \"\"" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.700003 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdldh\" (UniqueName: \"kubernetes.io/projected/1931c996-5088-425f-9e39-ef898c8742d8-kube-api-access-zdldh\") on node \"crc\" DevicePath \"\"" Feb 23 18:46:50 crc kubenswrapper[4768]: I0223 18:46:50.700019 4768 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1931c996-5088-425f-9e39-ef898c8742d8-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:46:51 crc kubenswrapper[4768]: I0223 18:46:51.235000 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" 
event={"ID":"1931c996-5088-425f-9e39-ef898c8742d8","Type":"ContainerDied","Data":"9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965"} Feb 23 18:46:51 crc kubenswrapper[4768]: I0223 18:46:51.235048 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c" Feb 23 18:46:51 crc kubenswrapper[4768]: I0223 18:46:51.235062 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fca8109a88db7f9d5dda29a0504543278234b12ca1d53be1642011867f1d965" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.748346 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n"] Feb 23 18:46:54 crc kubenswrapper[4768]: E0223 18:46:54.748923 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="extract" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.748940 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="extract" Feb 23 18:46:54 crc kubenswrapper[4768]: E0223 18:46:54.748955 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="util" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.748962 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="util" Feb 23 18:46:54 crc kubenswrapper[4768]: E0223 18:46:54.748980 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="pull" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.748990 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="pull" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.749104 4768 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1931c996-5088-425f-9e39-ef898c8742d8" containerName="extract" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.749655 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.755194 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-4mzvm" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.775750 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n"] Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.860572 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9qhs\" (UniqueName: \"kubernetes.io/projected/80dc5267-2395-41a2-8e61-152b0acbc24c-kube-api-access-s9qhs\") pod \"openstack-operator-controller-init-5dfcfd9b6-jhz5n\" (UID: \"80dc5267-2395-41a2-8e61-152b0acbc24c\") " pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.961808 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9qhs\" (UniqueName: \"kubernetes.io/projected/80dc5267-2395-41a2-8e61-152b0acbc24c-kube-api-access-s9qhs\") pod \"openstack-operator-controller-init-5dfcfd9b6-jhz5n\" (UID: \"80dc5267-2395-41a2-8e61-152b0acbc24c\") " pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:46:54 crc kubenswrapper[4768]: I0223 18:46:54.986215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9qhs\" (UniqueName: \"kubernetes.io/projected/80dc5267-2395-41a2-8e61-152b0acbc24c-kube-api-access-s9qhs\") pod \"openstack-operator-controller-init-5dfcfd9b6-jhz5n\" 
(UID: \"80dc5267-2395-41a2-8e61-152b0acbc24c\") " pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:46:55 crc kubenswrapper[4768]: I0223 18:46:55.113223 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:46:55 crc kubenswrapper[4768]: I0223 18:46:55.566894 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n"] Feb 23 18:46:55 crc kubenswrapper[4768]: W0223 18:46:55.587183 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80dc5267_2395_41a2_8e61_152b0acbc24c.slice/crio-43f17ec343a92da4ac2bdc84137847c54ed8c1a49afc6d3830459c9887979511 WatchSource:0}: Error finding container 43f17ec343a92da4ac2bdc84137847c54ed8c1a49afc6d3830459c9887979511: Status 404 returned error can't find the container with id 43f17ec343a92da4ac2bdc84137847c54ed8c1a49afc6d3830459c9887979511 Feb 23 18:46:56 crc kubenswrapper[4768]: I0223 18:46:56.267916 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" event={"ID":"80dc5267-2395-41a2-8e61-152b0acbc24c","Type":"ContainerStarted","Data":"43f17ec343a92da4ac2bdc84137847c54ed8c1a49afc6d3830459c9887979511"} Feb 23 18:47:00 crc kubenswrapper[4768]: I0223 18:47:00.302585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" event={"ID":"80dc5267-2395-41a2-8e61-152b0acbc24c","Type":"ContainerStarted","Data":"ea5fa45b30676508b87f498afdbdcff92ec9ed22d86f5a43c8aa9ffc58a38ffc"} Feb 23 18:47:00 crc kubenswrapper[4768]: I0223 18:47:00.303646 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:47:00 
crc kubenswrapper[4768]: I0223 18:47:00.344646 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" podStartSLOduration=2.615753977 podStartE2EDuration="6.344626655s" podCreationTimestamp="2026-02-23 18:46:54 +0000 UTC" firstStartedPulling="2026-02-23 18:46:55.589036404 +0000 UTC m=+810.979522204" lastFinishedPulling="2026-02-23 18:46:59.317909072 +0000 UTC m=+814.708394882" observedRunningTime="2026-02-23 18:47:00.340731148 +0000 UTC m=+815.731216988" watchObservedRunningTime="2026-02-23 18:47:00.344626655 +0000 UTC m=+815.735112465" Feb 23 18:47:05 crc kubenswrapper[4768]: I0223 18:47:05.117020 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5dfcfd9b6-jhz5n" Feb 23 18:47:09 crc kubenswrapper[4768]: I0223 18:47:09.545080 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:47:09 crc kubenswrapper[4768]: I0223 18:47:09.545827 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:47:09 crc kubenswrapper[4768]: I0223 18:47:09.545901 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:47:09 crc kubenswrapper[4768]: I0223 18:47:09.546864 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:47:09 crc kubenswrapper[4768]: I0223 18:47:09.546967 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9" gracePeriod=600 Feb 23 18:47:10 crc kubenswrapper[4768]: I0223 18:47:10.374409 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9" exitCode=0 Feb 23 18:47:10 crc kubenswrapper[4768]: I0223 18:47:10.374505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9"} Feb 23 18:47:10 crc kubenswrapper[4768]: I0223 18:47:10.374701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0"} Feb 23 18:47:10 crc kubenswrapper[4768]: I0223 18:47:10.374729 4768 scope.go:117] "RemoveContainer" containerID="09b667fa4dfa235f998d331776823655eb1fc751a363a9a542f56bfb1bf14fa1" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.172190 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl"] Feb 23 18:47:25 crc 
kubenswrapper[4768]: I0223 18:47:25.174130 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.183415 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.215236 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ffjxk" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.231882 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.232769 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.240382 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-g59jd" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.251693 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.267104 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.268595 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.271795 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mcp2f" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.292607 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.313051 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxkdt\" (UniqueName: \"kubernetes.io/projected/97f25c43-f624-4320-b34b-789df5cab5f3-kube-api-access-gxkdt\") pod \"barbican-operator-controller-manager-868647ff47-cj6bl\" (UID: \"97f25c43-f624-4320-b34b-789df5cab5f3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.313144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdjmn\" (UniqueName: \"kubernetes.io/projected/aba58523-2fad-45af-87ee-a347b586ad4b-kube-api-access-fdjmn\") pod \"designate-operator-controller-manager-6d8bf5c495-cprlh\" (UID: \"aba58523-2fad-45af-87ee-a347b586ad4b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.313182 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2cck\" (UniqueName: \"kubernetes.io/projected/b16ba816-bafa-430e-b18a-5afa27bc0abb-kube-api-access-k2cck\") pod \"cinder-operator-controller-manager-55d77d7b5c-mng89\" (UID: \"b16ba816-bafa-430e-b18a-5afa27bc0abb\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:25 crc kubenswrapper[4768]: 
I0223 18:47:25.330367 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.349868 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.365628 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-96j5c" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.388321 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.408330 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.409414 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.410013 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.410432 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.414005 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ncctd" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.414281 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4f4x8" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.414967 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxkdt\" (UniqueName: \"kubernetes.io/projected/97f25c43-f624-4320-b34b-789df5cab5f3-kube-api-access-gxkdt\") pod \"barbican-operator-controller-manager-868647ff47-cj6bl\" (UID: \"97f25c43-f624-4320-b34b-789df5cab5f3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.415055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflwm\" (UniqueName: \"kubernetes.io/projected/be4fc57a-a006-4068-be4b-5bdeb50f48b4-kube-api-access-fflwm\") pod \"glance-operator-controller-manager-784b5bb6c5-chqsr\" (UID: \"be4fc57a-a006-4068-be4b-5bdeb50f48b4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.415136 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdjmn\" (UniqueName: \"kubernetes.io/projected/aba58523-2fad-45af-87ee-a347b586ad4b-kube-api-access-fdjmn\") pod \"designate-operator-controller-manager-6d8bf5c495-cprlh\" (UID: \"aba58523-2fad-45af-87ee-a347b586ad4b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.415166 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2cck\" (UniqueName: \"kubernetes.io/projected/b16ba816-bafa-430e-b18a-5afa27bc0abb-kube-api-access-k2cck\") pod \"cinder-operator-controller-manager-55d77d7b5c-mng89\" (UID: \"b16ba816-bafa-430e-b18a-5afa27bc0abb\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.420717 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.430045 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-gn242"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.430968 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.436616 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.436874 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-pffds" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.463374 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-gn242"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.468146 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2cck\" (UniqueName: \"kubernetes.io/projected/b16ba816-bafa-430e-b18a-5afa27bc0abb-kube-api-access-k2cck\") pod \"cinder-operator-controller-manager-55d77d7b5c-mng89\" (UID: \"b16ba816-bafa-430e-b18a-5afa27bc0abb\") " 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.469932 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxkdt\" (UniqueName: \"kubernetes.io/projected/97f25c43-f624-4320-b34b-789df5cab5f3-kube-api-access-gxkdt\") pod \"barbican-operator-controller-manager-868647ff47-cj6bl\" (UID: \"97f25c43-f624-4320-b34b-789df5cab5f3\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.518265 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdjmn\" (UniqueName: \"kubernetes.io/projected/aba58523-2fad-45af-87ee-a347b586ad4b-kube-api-access-fdjmn\") pod \"designate-operator-controller-manager-6d8bf5c495-cprlh\" (UID: \"aba58523-2fad-45af-87ee-a347b586ad4b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.530081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflwm\" (UniqueName: \"kubernetes.io/projected/be4fc57a-a006-4068-be4b-5bdeb50f48b4-kube-api-access-fflwm\") pod \"glance-operator-controller-manager-784b5bb6c5-chqsr\" (UID: \"be4fc57a-a006-4068-be4b-5bdeb50f48b4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.531014 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.566674 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.646638 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ffjxk" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.647327 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.648537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.665428 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.672186 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbtd\" (UniqueName: \"kubernetes.io/projected/0f6c6c75-0fda-41cc-b05f-cfc6e935f82b-kube-api-access-tbbtd\") pod \"horizon-operator-controller-manager-5b9b8895d5-stm2m\" (UID: \"0f6c6c75-0fda-41cc-b05f-cfc6e935f82b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.678507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw6mf\" (UniqueName: \"kubernetes.io/projected/02eb4c80-855b-4590-b09e-d6e6b7919f74-kube-api-access-bw6mf\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.678760 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9dn8\" (UniqueName: \"kubernetes.io/projected/60e38add-201e-4431-90df-d9c31ba57f39-kube-api-access-h9dn8\") pod \"heat-operator-controller-manager-69f49c598c-qnwrc\" (UID: \"60e38add-201e-4431-90df-d9c31ba57f39\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.678836 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.678862 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.694887 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflwm\" (UniqueName: \"kubernetes.io/projected/be4fc57a-a006-4068-be4b-5bdeb50f48b4-kube-api-access-fflwm\") pod \"glance-operator-controller-manager-784b5bb6c5-chqsr\" (UID: \"be4fc57a-a006-4068-be4b-5bdeb50f48b4\") " pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.694966 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.695851 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.700479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-dp7bf" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.708817 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-n4sqp" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.728343 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.769874 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.782822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2k76\" (UniqueName: \"kubernetes.io/projected/0522a131-cf71-4a3e-b60a-fa16371d47d8-kube-api-access-d2k76\") pod \"keystone-operator-controller-manager-b4d948c87-l5mqh\" (UID: \"0522a131-cf71-4a3e-b60a-fa16371d47d8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.782935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9dn8\" (UniqueName: \"kubernetes.io/projected/60e38add-201e-4431-90df-d9c31ba57f39-kube-api-access-h9dn8\") pod \"heat-operator-controller-manager-69f49c598c-qnwrc\" (UID: \"60e38add-201e-4431-90df-d9c31ba57f39\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.783182 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.783237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fpp6\" (UniqueName: \"kubernetes.io/projected/d52cf386-a646-44c0-8394-cdf497e52ebe-kube-api-access-7fpp6\") pod \"ironic-operator-controller-manager-554564d7fc-wwlql\" (UID: \"d52cf386-a646-44c0-8394-cdf497e52ebe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.783292 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbtd\" (UniqueName: \"kubernetes.io/projected/0f6c6c75-0fda-41cc-b05f-cfc6e935f82b-kube-api-access-tbbtd\") pod \"horizon-operator-controller-manager-5b9b8895d5-stm2m\" (UID: \"0f6c6c75-0fda-41cc-b05f-cfc6e935f82b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.783332 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw6mf\" (UniqueName: \"kubernetes.io/projected/02eb4c80-855b-4590-b09e-d6e6b7919f74-kube-api-access-bw6mf\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: E0223 18:47:25.783797 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:25 crc kubenswrapper[4768]: E0223 18:47:25.783869 4768 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:26.283852691 +0000 UTC m=+841.674338491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.806432 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.807271 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.813545 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-7grn5" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.827430 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw6mf\" (UniqueName: \"kubernetes.io/projected/02eb4c80-855b-4590-b09e-d6e6b7919f74-kube-api-access-bw6mf\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.828719 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.829613 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.837723 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-pq426" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.845091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9dn8\" (UniqueName: \"kubernetes.io/projected/60e38add-201e-4431-90df-d9c31ba57f39-kube-api-access-h9dn8\") pod \"heat-operator-controller-manager-69f49c598c-qnwrc\" (UID: \"60e38add-201e-4431-90df-d9c31ba57f39\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.851039 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbtd\" (UniqueName: \"kubernetes.io/projected/0f6c6c75-0fda-41cc-b05f-cfc6e935f82b-kube-api-access-tbbtd\") pod \"horizon-operator-controller-manager-5b9b8895d5-stm2m\" (UID: \"0f6c6c75-0fda-41cc-b05f-cfc6e935f82b\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.877470 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.884557 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.885683 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.888267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm8q\" (UniqueName: \"kubernetes.io/projected/0f3afa5e-021e-4226-9734-38d4da145e0a-kube-api-access-btm8q\") pod \"manila-operator-controller-manager-67d996989d-xm2kv\" (UID: \"0f3afa5e-021e-4226-9734-38d4da145e0a\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.888314 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6q74\" (UniqueName: \"kubernetes.io/projected/8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b-kube-api-access-x6q74\") pod \"mariadb-operator-controller-manager-6994f66f48-mzwrn\" (UID: \"8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.888462 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fpp6\" (UniqueName: \"kubernetes.io/projected/d52cf386-a646-44c0-8394-cdf497e52ebe-kube-api-access-7fpp6\") pod \"ironic-operator-controller-manager-554564d7fc-wwlql\" (UID: \"d52cf386-a646-44c0-8394-cdf497e52ebe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.888831 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2k76\" (UniqueName: \"kubernetes.io/projected/0522a131-cf71-4a3e-b60a-fa16371d47d8-kube-api-access-d2k76\") pod \"keystone-operator-controller-manager-b4d948c87-l5mqh\" (UID: \"0522a131-cf71-4a3e-b60a-fa16371d47d8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:25 
crc kubenswrapper[4768]: I0223 18:47:25.899018 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g2rb5" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.901634 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.915460 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.929574 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.933026 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.936908 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.938163 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.938941 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-tnzsn" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.940811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2k76\" (UniqueName: \"kubernetes.io/projected/0522a131-cf71-4a3e-b60a-fa16371d47d8-kube-api-access-d2k76\") pod \"keystone-operator-controller-manager-b4d948c87-l5mqh\" (UID: \"0522a131-cf71-4a3e-b60a-fa16371d47d8\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.942409 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-sbpxh" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.944648 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.945588 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fpp6\" (UniqueName: \"kubernetes.io/projected/d52cf386-a646-44c0-8394-cdf497e52ebe-kube-api-access-7fpp6\") pod \"ironic-operator-controller-manager-554564d7fc-wwlql\" (UID: \"d52cf386-a646-44c0-8394-cdf497e52ebe\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.950053 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.957890 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.958734 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.961795 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.962707 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-5gwt8" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.970585 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"] Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.981893 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7md8\" (UniqueName: \"kubernetes.io/projected/9afd4512-6186-4cb8-a8ba-90628662efba-kube-api-access-f7md8\") pod \"neutron-operator-controller-manager-6bd4687957-w5x47\" (UID: \"9afd4512-6186-4cb8-a8ba-90628662efba\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990341 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmdsb\" (UniqueName: \"kubernetes.io/projected/fff6d2ff-130f-45ae-943a-28b8740298c2-kube-api-access-hmdsb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990448 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm8q\" (UniqueName: \"kubernetes.io/projected/0f3afa5e-021e-4226-9734-38d4da145e0a-kube-api-access-btm8q\") pod \"manila-operator-controller-manager-67d996989d-xm2kv\" (UID: \"0f3afa5e-021e-4226-9734-38d4da145e0a\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990523 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6q74\" (UniqueName: \"kubernetes.io/projected/8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b-kube-api-access-x6q74\") pod \"mariadb-operator-controller-manager-6994f66f48-mzwrn\" (UID: \"8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990593 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrvfh\" (UniqueName: \"kubernetes.io/projected/cbbc4a69-26c2-4d05-b369-aa142f5a04d2-kube-api-access-qrvfh\") pod \"nova-operator-controller-manager-567668f5cf-pmp8k\" (UID: \"cbbc4a69-26c2-4d05-b369-aa142f5a04d2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990690 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn4kw\" (UniqueName: \"kubernetes.io/projected/c7086dd9-9e6f-4207-a037-99369dc6e980-kube-api-access-jn4kw\") pod \"octavia-operator-controller-manager-659dc6bbfc-7vrp5\" (UID: \"c7086dd9-9e6f-4207-a037-99369dc6e980\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:25 crc kubenswrapper[4768]: I0223 18:47:25.990795 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.000898 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.001744 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.005583 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-fh49f" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.022234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm8q\" (UniqueName: \"kubernetes.io/projected/0f3afa5e-021e-4226-9734-38d4da145e0a-kube-api-access-btm8q\") pod \"manila-operator-controller-manager-67d996989d-xm2kv\" (UID: \"0f3afa5e-021e-4226-9734-38d4da145e0a\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.030047 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6q74\" (UniqueName: \"kubernetes.io/projected/8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b-kube-api-access-x6q74\") pod \"mariadb-operator-controller-manager-6994f66f48-mzwrn\" (UID: \"8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.039144 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.047531 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.048866 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.052885 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-29z6p" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.067899 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.081760 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stnr\" (UniqueName: \"kubernetes.io/projected/435b416a-a73b-420a-9f48-99be70b4e110-kube-api-access-9stnr\") pod \"ovn-operator-controller-manager-5955d8c787-g9dpw\" (UID: \"435b416a-a73b-420a-9f48-99be70b4e110\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092109 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrvfh\" (UniqueName: \"kubernetes.io/projected/cbbc4a69-26c2-4d05-b369-aa142f5a04d2-kube-api-access-qrvfh\") pod \"nova-operator-controller-manager-567668f5cf-pmp8k\" (UID: \"cbbc4a69-26c2-4d05-b369-aa142f5a04d2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtdps\" (UniqueName: \"kubernetes.io/projected/ea71893a-6b37-4cc9-b0f5-be711669e8d1-kube-api-access-mtdps\") pod \"placement-operator-controller-manager-8497b45c89-t5qm2\" 
(UID: \"ea71893a-6b37-4cc9-b0f5-be711669e8d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn4kw\" (UniqueName: \"kubernetes.io/projected/c7086dd9-9e6f-4207-a037-99369dc6e980-kube-api-access-jn4kw\") pod \"octavia-operator-controller-manager-659dc6bbfc-7vrp5\" (UID: \"c7086dd9-9e6f-4207-a037-99369dc6e980\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092259 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092293 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7md8\" (UniqueName: \"kubernetes.io/projected/9afd4512-6186-4cb8-a8ba-90628662efba-kube-api-access-f7md8\") pod \"neutron-operator-controller-manager-6bd4687957-w5x47\" (UID: \"9afd4512-6186-4cb8-a8ba-90628662efba\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.092319 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmdsb\" (UniqueName: \"kubernetes.io/projected/fff6d2ff-130f-45ae-943a-28b8740298c2-kube-api-access-hmdsb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:26 
crc kubenswrapper[4768]: E0223 18:47:26.096061 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.096128 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:26.596111014 +0000 UTC m=+841.986596824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.119865 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmdsb\" (UniqueName: \"kubernetes.io/projected/fff6d2ff-130f-45ae-943a-28b8740298c2-kube-api-access-hmdsb\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.124022 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrvfh\" (UniqueName: \"kubernetes.io/projected/cbbc4a69-26c2-4d05-b369-aa142f5a04d2-kube-api-access-qrvfh\") pod \"nova-operator-controller-manager-567668f5cf-pmp8k\" (UID: \"cbbc4a69-26c2-4d05-b369-aa142f5a04d2\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.124825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn4kw\" 
(UniqueName: \"kubernetes.io/projected/c7086dd9-9e6f-4207-a037-99369dc6e980-kube-api-access-jn4kw\") pod \"octavia-operator-controller-manager-659dc6bbfc-7vrp5\" (UID: \"c7086dd9-9e6f-4207-a037-99369dc6e980\") " pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.126368 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.131404 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q66cg"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.131753 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7md8\" (UniqueName: \"kubernetes.io/projected/9afd4512-6186-4cb8-a8ba-90628662efba-kube-api-access-f7md8\") pod \"neutron-operator-controller-manager-6bd4687957-w5x47\" (UID: \"9afd4512-6186-4cb8-a8ba-90628662efba\") " pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.136669 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.139137 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-82vvj" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.170779 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.176400 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q66cg"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.181355 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.182834 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.185698 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-fd4gc" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.198076 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtdps\" (UniqueName: \"kubernetes.io/projected/ea71893a-6b37-4cc9-b0f5-be711669e8d1-kube-api-access-mtdps\") pod \"placement-operator-controller-manager-8497b45c89-t5qm2\" (UID: \"ea71893a-6b37-4cc9-b0f5-be711669e8d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.198517 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.200963 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxcfn\" (UniqueName: \"kubernetes.io/projected/13137d15-ffaa-4127-9885-91e9a6fd6a65-kube-api-access-rxcfn\") pod \"swift-operator-controller-manager-68f46476f-q66cg\" (UID: \"13137d15-ffaa-4127-9885-91e9a6fd6a65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.201307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9stnr\" (UniqueName: \"kubernetes.io/projected/435b416a-a73b-420a-9f48-99be70b4e110-kube-api-access-9stnr\") pod \"ovn-operator-controller-manager-5955d8c787-g9dpw\" (UID: \"435b416a-a73b-420a-9f48-99be70b4e110\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.210200 4768 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.231208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9stnr\" (UniqueName: \"kubernetes.io/projected/435b416a-a73b-420a-9f48-99be70b4e110-kube-api-access-9stnr\") pod \"ovn-operator-controller-manager-5955d8c787-g9dpw\" (UID: \"435b416a-a73b-420a-9f48-99be70b4e110\") " pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.240556 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.242806 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.244063 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtdps\" (UniqueName: \"kubernetes.io/projected/ea71893a-6b37-4cc9-b0f5-be711669e8d1-kube-api-access-mtdps\") pod \"placement-operator-controller-manager-8497b45c89-t5qm2\" (UID: \"ea71893a-6b37-4cc9-b0f5-be711669e8d1\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.249297 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4bnq6" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.260629 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.273547 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.285165 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.286125 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.292183 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fbrwg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.294669 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.295040 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.312398 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfs9p\" (UniqueName: \"kubernetes.io/projected/034d1fc6-6b51-4e9a-99f9-67038d4c9926-kube-api-access-kfs9p\") pod \"telemetry-operator-controller-manager-589c568786-6wfdk\" (UID: \"034d1fc6-6b51-4e9a-99f9-67038d4c9926\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.312526 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.312558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxcfn\" (UniqueName: \"kubernetes.io/projected/13137d15-ffaa-4127-9885-91e9a6fd6a65-kube-api-access-rxcfn\") pod \"swift-operator-controller-manager-68f46476f-q66cg\" (UID: \"13137d15-ffaa-4127-9885-91e9a6fd6a65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.312601 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcg58\" (UniqueName: \"kubernetes.io/projected/0b78a9a3-5a2b-435d-8e2f-661eddd91177-kube-api-access-zcg58\") pod \"test-operator-controller-manager-5dc6794d5b-nc28p\" (UID: \"0b78a9a3-5a2b-435d-8e2f-661eddd91177\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.312790 
4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.312895 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:27.312865036 +0000 UTC m=+842.703350836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.317523 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.326522 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.329770 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.331877 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bflcf" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.333469 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.333688 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.338541 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.340801 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxcfn\" (UniqueName: \"kubernetes.io/projected/13137d15-ffaa-4127-9885-91e9a6fd6a65-kube-api-access-rxcfn\") pod \"swift-operator-controller-manager-68f46476f-q66cg\" (UID: \"13137d15-ffaa-4127-9885-91e9a6fd6a65\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.368446 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.369463 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.371710 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-s2hrb" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.375519 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.389077 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.392942 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfs9p\" (UniqueName: \"kubernetes.io/projected/034d1fc6-6b51-4e9a-99f9-67038d4c9926-kube-api-access-kfs9p\") pod \"telemetry-operator-controller-manager-589c568786-6wfdk\" (UID: \"034d1fc6-6b51-4e9a-99f9-67038d4c9926\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413500 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhn9g\" (UniqueName: \"kubernetes.io/projected/86030533-da46-4579-a1ce-67f3d96c7a90-kube-api-access-xhn9g\") pod \"watcher-operator-controller-manager-bccc79885-gn98t\" (UID: \"86030533-da46-4579-a1ce-67f3d96c7a90\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413528 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcg58\" (UniqueName: \"kubernetes.io/projected/0b78a9a3-5a2b-435d-8e2f-661eddd91177-kube-api-access-zcg58\") pod \"test-operator-controller-manager-5dc6794d5b-nc28p\" (UID: \"0b78a9a3-5a2b-435d-8e2f-661eddd91177\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413628 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlw7s\" (UniqueName: \"kubernetes.io/projected/92c4522a-291f-4c44-8e08-8e4002685f66-kube-api-access-jlw7s\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.413696 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghbw\" (UniqueName: 
\"kubernetes.io/projected/d74d7097-0324-4bb7-83c6-fa8cea69c1b4-kube-api-access-bghbw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhmmp\" (UID: \"d74d7097-0324-4bb7-83c6-fa8cea69c1b4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.437132 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.438133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcg58\" (UniqueName: \"kubernetes.io/projected/0b78a9a3-5a2b-435d-8e2f-661eddd91177-kube-api-access-zcg58\") pod \"test-operator-controller-manager-5dc6794d5b-nc28p\" (UID: \"0b78a9a3-5a2b-435d-8e2f-661eddd91177\") " pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.443061 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfs9p\" (UniqueName: \"kubernetes.io/projected/034d1fc6-6b51-4e9a-99f9-67038d4c9926-kube-api-access-kfs9p\") pod \"telemetry-operator-controller-manager-589c568786-6wfdk\" (UID: \"034d1fc6-6b51-4e9a-99f9-67038d4c9926\") " pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.460316 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.479185 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.522238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghbw\" (UniqueName: \"kubernetes.io/projected/d74d7097-0324-4bb7-83c6-fa8cea69c1b4-kube-api-access-bghbw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhmmp\" (UID: \"d74d7097-0324-4bb7-83c6-fa8cea69c1b4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.522443 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhn9g\" (UniqueName: \"kubernetes.io/projected/86030533-da46-4579-a1ce-67f3d96c7a90-kube-api-access-xhn9g\") pod \"watcher-operator-controller-manager-bccc79885-gn98t\" (UID: \"86030533-da46-4579-a1ce-67f3d96c7a90\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.522484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.522580 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.522611 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlw7s\" (UniqueName: \"kubernetes.io/projected/92c4522a-291f-4c44-8e08-8e4002685f66-kube-api-access-jlw7s\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.523866 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.523939 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:27.023918111 +0000 UTC m=+842.414403911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.523993 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.524015 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:27.024008563 +0000 UTC m=+842.414494363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.544391 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.557967 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhn9g\" (UniqueName: \"kubernetes.io/projected/86030533-da46-4579-a1ce-67f3d96c7a90-kube-api-access-xhn9g\") pod \"watcher-operator-controller-manager-bccc79885-gn98t\" (UID: \"86030533-da46-4579-a1ce-67f3d96c7a90\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.570129 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghbw\" (UniqueName: \"kubernetes.io/projected/d74d7097-0324-4bb7-83c6-fa8cea69c1b4-kube-api-access-bghbw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dhmmp\" (UID: \"d74d7097-0324-4bb7-83c6-fa8cea69c1b4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.582452 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.585680 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlw7s\" (UniqueName: \"kubernetes.io/projected/92c4522a-291f-4c44-8e08-8e4002685f66-kube-api-access-jlw7s\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.589102 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.620661 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.623819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.623980 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: E0223 18:47:26.624029 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. 
No retries permitted until 2026-02-23 18:47:27.624015988 +0000 UTC m=+843.014501788 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:26 crc kubenswrapper[4768]: W0223 18:47:26.626352 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97f25c43_f624_4320_b34b_789df5cab5f3.slice/crio-be1d01da8171b5845029fe24b2421fae495e7ca8ebd510af837306f97679878e WatchSource:0}: Error finding container be1d01da8171b5845029fe24b2421fae495e7ca8ebd510af837306f97679878e: Status 404 returned error can't find the container with id be1d01da8171b5845029fe24b2421fae495e7ca8ebd510af837306f97679878e Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.632137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.654752 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.663787 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.670588 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" event={"ID":"97f25c43-f624-4320-b34b-789df5cab5f3","Type":"ContainerStarted","Data":"be1d01da8171b5845029fe24b2421fae495e7ca8ebd510af837306f97679878e"} Feb 23 18:47:26 crc kubenswrapper[4768]: W0223 18:47:26.696131 4768 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaba58523_2fad_45af_87ee_a347b586ad4b.slice/crio-0e0c649d58fdfc9e6b326e3fd46e45aa33cadec5312f722d356c473ee6429883 WatchSource:0}: Error finding container 0e0c649d58fdfc9e6b326e3fd46e45aa33cadec5312f722d356c473ee6429883: Status 404 returned error can't find the container with id 0e0c649d58fdfc9e6b326e3fd46e45aa33cadec5312f722d356c473ee6429883 Feb 23 18:47:26 crc kubenswrapper[4768]: W0223 18:47:26.697563 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4fc57a_a006_4068_be4b_5bdeb50f48b4.slice/crio-e6c0593bd6e4ab8305b94f2b819ce6ccca4ade4c39fee6604c7853a79e30314b WatchSource:0}: Error finding container e6c0593bd6e4ab8305b94f2b819ce6ccca4ade4c39fee6604c7853a79e30314b: Status 404 returned error can't find the container with id e6c0593bd6e4ab8305b94f2b819ce6ccca4ade4c39fee6604c7853a79e30314b Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.703037 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.780988 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.801556 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc"] Feb 23 18:47:26 crc kubenswrapper[4768]: I0223 18:47:26.816665 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.041375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.042007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.041602 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.042150 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs 
podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:28.042124083 +0000 UTC m=+843.432609883 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "webhook-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.042258 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.042343 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:28.042318178 +0000 UTC m=+843.432803968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.054054 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.061403 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.079526 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn"] Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.091806 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f3b00ff_a5fc_422c_81fd_e9c0e2a6bf1b.slice/crio-0af771eba07ab0f805c5281936a7a7a3600ab91494a3121b0e552c41278d2aa1 WatchSource:0}: Error finding container 0af771eba07ab0f805c5281936a7a7a3600ab91494a3121b0e552c41278d2aa1: Status 404 returned error can't find the container with id 0af771eba07ab0f805c5281936a7a7a3600ab91494a3121b0e552c41278d2aa1 Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.092375 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv"] Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.106899 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9afd4512_6186_4cb8_a8ba_90628662efba.slice/crio-02c9a86c59ade936a9c3ca24878456ae85d0bbf13807ac0690abf6968f2d58aa WatchSource:0}: Error finding container 
02c9a86c59ade936a9c3ca24878456ae85d0bbf13807ac0690abf6968f2d58aa: Status 404 returned error can't find the container with id 02c9a86c59ade936a9c3ca24878456ae85d0bbf13807ac0690abf6968f2d58aa Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.114141 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k"] Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.116208 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrvfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-pmp8k_openstack-operators(cbbc4a69-26c2-4d05-b369-aa142f5a04d2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.117418 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" podUID="cbbc4a69-26c2-4d05-b369-aa142f5a04d2" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.127527 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.238873 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q66cg"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.243899 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2"] Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.246831 4768 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13137d15_ffaa_4127_9885_91e9a6fd6a65.slice/crio-1df9d095b857aa7fd9da030000ad3a0b3ef8746f81f00ffbcc141b60a637f374 WatchSource:0}: Error finding container 1df9d095b857aa7fd9da030000ad3a0b3ef8746f81f00ffbcc141b60a637f374: Status 404 returned error can't find the container with id 1df9d095b857aa7fd9da030000ad3a0b3ef8746f81f00ffbcc141b60a637f374 Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.263662 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtdps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-t5qm2_openstack-operators(ea71893a-6b37-4cc9-b0f5-be711669e8d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.264869 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" podUID="ea71893a-6b37-4cc9-b0f5-be711669e8d1" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.346492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " 
pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.352068 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.352154 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:29.352138017 +0000 UTC m=+844.742623817 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.358025 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.368289 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw"] Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.373645 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p"] Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.374146 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bghbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dhmmp_openstack-operators(d74d7097-0324-4bb7-83c6-fa8cea69c1b4): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.375307 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" podUID="d74d7097-0324-4bb7-83c6-fa8cea69c1b4" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.378690 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t"] Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.379878 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9stnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5955d8c787-g9dpw_openstack-operators(435b416a-a73b-420a-9f48-99be70b4e110): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.381143 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" podUID="435b416a-a73b-420a-9f48-99be70b4e110" Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.382807 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86030533_da46_4579_a1ce_67f3d96c7a90.slice/crio-0a1bea292c51b9acc65ab54bb009dd742330b9dc5ae8a33a038e7405d12f7b6a WatchSource:0}: Error finding container 
0a1bea292c51b9acc65ab54bb009dd742330b9dc5ae8a33a038e7405d12f7b6a: Status 404 returned error can't find the container with id 0a1bea292c51b9acc65ab54bb009dd742330b9dc5ae8a33a038e7405d12f7b6a Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.384354 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk"] Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.384789 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod034d1fc6_6b51_4e9a_99f9_67038d4c9926.slice/crio-0f0b6923377e1a4d088e36105df631c2bc7ff05c96e7d4b7caf47b8147877986 WatchSource:0}: Error finding container 0f0b6923377e1a4d088e36105df631c2bc7ff05c96e7d4b7caf47b8147877986: Status 404 returned error can't find the container with id 0f0b6923377e1a4d088e36105df631c2bc7ff05c96e7d4b7caf47b8147877986 Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.387501 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kfs9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-589c568786-6wfdk_openstack-operators(034d1fc6-6b51-4e9a-99f9-67038d4c9926): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: W0223 18:47:27.388380 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b78a9a3_5a2b_435d_8e2f_661eddd91177.slice/crio-4cf0fb228f18bc61c890483bbdd47d6ebe7f6ef941df54e04cd91746c13d5988 WatchSource:0}: Error finding container 
4cf0fb228f18bc61c890483bbdd47d6ebe7f6ef941df54e04cd91746c13d5988: Status 404 returned error can't find the container with id 4cf0fb228f18bc61c890483bbdd47d6ebe7f6ef941df54e04cd91746c13d5988 Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.388753 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" podUID="034d1fc6-6b51-4e9a-99f9-67038d4c9926" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.389551 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhn9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-gn98t_openstack-operators(86030533-da46-4579-a1ce-67f3d96c7a90): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.390651 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" podUID="86030533-da46-4579-a1ce-67f3d96c7a90" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.392547 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcg58,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5dc6794d5b-nc28p_openstack-operators(0b78a9a3-5a2b-435d-8e2f-661eddd91177): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.394286 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" podUID="0b78a9a3-5a2b-435d-8e2f-661eddd91177" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.658167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.658410 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 
18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.658510 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:29.65849071 +0000 UTC m=+845.048976510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.681009 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" event={"ID":"9afd4512-6186-4cb8-a8ba-90628662efba","Type":"ContainerStarted","Data":"02c9a86c59ade936a9c3ca24878456ae85d0bbf13807ac0690abf6968f2d58aa"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.683941 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" event={"ID":"86030533-da46-4579-a1ce-67f3d96c7a90","Type":"ContainerStarted","Data":"0a1bea292c51b9acc65ab54bb009dd742330b9dc5ae8a33a038e7405d12f7b6a"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.685630 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" podUID="86030533-da46-4579-a1ce-67f3d96c7a90" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.689285 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" event={"ID":"0522a131-cf71-4a3e-b60a-fa16371d47d8","Type":"ContainerStarted","Data":"e3e624ed661d4b8f7997c02113bcda8777890b4b2dceeab0a4646b9db50f862b"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.691307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" event={"ID":"8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b","Type":"ContainerStarted","Data":"0af771eba07ab0f805c5281936a7a7a3600ab91494a3121b0e552c41278d2aa1"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.694801 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" event={"ID":"cbbc4a69-26c2-4d05-b369-aa142f5a04d2","Type":"ContainerStarted","Data":"0c9d6561675101917922160e65dfa8ba00aa2bceb190d3c51921d72ef0874f4d"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.697140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" event={"ID":"be4fc57a-a006-4068-be4b-5bdeb50f48b4","Type":"ContainerStarted","Data":"e6c0593bd6e4ab8305b94f2b819ce6ccca4ade4c39fee6604c7853a79e30314b"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.701495 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" podUID="cbbc4a69-26c2-4d05-b369-aa142f5a04d2" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.705877 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" 
event={"ID":"0f3afa5e-021e-4226-9734-38d4da145e0a","Type":"ContainerStarted","Data":"dd97ccb57d040c4a454e3bf4e80a0b65f4948cb1cc3bfcfd8940bb30f512455f"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.707885 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" event={"ID":"b16ba816-bafa-430e-b18a-5afa27bc0abb","Type":"ContainerStarted","Data":"0bdf2b0d897611e63a53e8713ca4dca333341a74047c2982b575b613d288e225"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.710098 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" event={"ID":"60e38add-201e-4431-90df-d9c31ba57f39","Type":"ContainerStarted","Data":"e853c5337bc065f749873cfb8962cc0063ea3d837bf1e45b5fc78b2e2c15f7d0"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.711673 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" event={"ID":"13137d15-ffaa-4127-9885-91e9a6fd6a65","Type":"ContainerStarted","Data":"1df9d095b857aa7fd9da030000ad3a0b3ef8746f81f00ffbcc141b60a637f374"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.723696 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" event={"ID":"0f6c6c75-0fda-41cc-b05f-cfc6e935f82b","Type":"ContainerStarted","Data":"a3ce9c6ae22526ebf67ad67534739eeda864d279febc2698cff5fd95d2903345"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.725740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" event={"ID":"d52cf386-a646-44c0-8394-cdf497e52ebe","Type":"ContainerStarted","Data":"c3a5d2c4a4090b5f3625305e93c1252460b655b34b6f3eaf5776eba6dd3439b9"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.728558 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" event={"ID":"aba58523-2fad-45af-87ee-a347b586ad4b","Type":"ContainerStarted","Data":"0e0c649d58fdfc9e6b326e3fd46e45aa33cadec5312f722d356c473ee6429883"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.730042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" event={"ID":"034d1fc6-6b51-4e9a-99f9-67038d4c9926","Type":"ContainerStarted","Data":"0f0b6923377e1a4d088e36105df631c2bc7ff05c96e7d4b7caf47b8147877986"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.734418 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" podUID="034d1fc6-6b51-4e9a-99f9-67038d4c9926" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.737610 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" event={"ID":"0b78a9a3-5a2b-435d-8e2f-661eddd91177","Type":"ContainerStarted","Data":"4cf0fb228f18bc61c890483bbdd47d6ebe7f6ef941df54e04cd91746c13d5988"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.739182 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" podUID="0b78a9a3-5a2b-435d-8e2f-661eddd91177" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.742832 4768 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" event={"ID":"ea71893a-6b37-4cc9-b0f5-be711669e8d1","Type":"ContainerStarted","Data":"b2d175f621073e154150c5a1d3e789d639de360c0d07aa36cc41b0ba3ae16b6c"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.744569 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" podUID="ea71893a-6b37-4cc9-b0f5-be711669e8d1" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.746849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" event={"ID":"c7086dd9-9e6f-4207-a037-99369dc6e980","Type":"ContainerStarted","Data":"10eedc32d6e3911067287aaf53c71fe2383172586858b1b6eda639a5863b5093"} Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.757795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" event={"ID":"d74d7097-0324-4bb7-83c6-fa8cea69c1b4","Type":"ContainerStarted","Data":"e7785b3947a94c36e2fda04b56757248831a6e508647a7ffbcb3645cc0bae067"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.763070 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" podUID="d74d7097-0324-4bb7-83c6-fa8cea69c1b4" Feb 23 18:47:27 crc kubenswrapper[4768]: I0223 18:47:27.766126 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" event={"ID":"435b416a-a73b-420a-9f48-99be70b4e110","Type":"ContainerStarted","Data":"5534fc532569a1d870316c49727588a78a1768c61f816f1f4f79979aa38d16e7"} Feb 23 18:47:27 crc kubenswrapper[4768]: E0223 18:47:27.767858 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" podUID="435b416a-a73b-420a-9f48-99be70b4e110" Feb 23 18:47:28 crc kubenswrapper[4768]: I0223 18:47:28.065513 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:28 crc kubenswrapper[4768]: I0223 18:47:28.065657 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.066346 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.066457 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs 
podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:30.066438554 +0000 UTC m=+845.456924344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "webhook-server-cert" not found Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.066486 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.066548 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:30.066524086 +0000 UTC m=+845.457009966 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.780565 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" podUID="86030533-da46-4579-a1ce-67f3d96c7a90" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.780943 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:4eb8fab5530a08915d3ab3e11e2808aeae16c8a220ed34ee04a186b2ae2303dc\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" podUID="034d1fc6-6b51-4e9a-99f9-67038d4c9926" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.781404 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" podUID="ea71893a-6b37-4cc9-b0f5-be711669e8d1" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.781481 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" podUID="d74d7097-0324-4bb7-83c6-fa8cea69c1b4" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.781561 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:f4143497c70c048a7733c284060347a0c74ef4e628aca22ee191e5bc9e4c7192\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" podUID="435b416a-a73b-420a-9f48-99be70b4e110" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.782514 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:38e6a5bd24ab1684f22a64186fe99a7cdc7897eb7feb715ec1704eea7596dd98\\\"\"" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" podUID="0b78a9a3-5a2b-435d-8e2f-661eddd91177" Feb 23 18:47:28 crc kubenswrapper[4768]: E0223 18:47:28.786653 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" podUID="cbbc4a69-26c2-4d05-b369-aa142f5a04d2" Feb 23 18:47:29 crc kubenswrapper[4768]: I0223 18:47:29.390957 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: 
\"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:29 crc kubenswrapper[4768]: E0223 18:47:29.391182 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:29 crc kubenswrapper[4768]: E0223 18:47:29.391289 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:33.391268378 +0000 UTC m=+848.781754178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:29 crc kubenswrapper[4768]: I0223 18:47:29.696063 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:29 crc kubenswrapper[4768]: E0223 18:47:29.696739 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:29 crc kubenswrapper[4768]: E0223 18:47:29.696894 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. 
No retries permitted until 2026-02-23 18:47:33.696862801 +0000 UTC m=+849.087348641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:30 crc kubenswrapper[4768]: I0223 18:47:30.115241 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:30 crc kubenswrapper[4768]: I0223 18:47:30.115402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:30 crc kubenswrapper[4768]: E0223 18:47:30.115566 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 18:47:30 crc kubenswrapper[4768]: E0223 18:47:30.115625 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:34.11560646 +0000 UTC m=+849.506092270 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "webhook-server-cert" not found Feb 23 18:47:30 crc kubenswrapper[4768]: E0223 18:47:30.116005 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:30 crc kubenswrapper[4768]: E0223 18:47:30.116035 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:34.116026331 +0000 UTC m=+849.506512141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:33 crc kubenswrapper[4768]: I0223 18:47:33.471084 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:33 crc kubenswrapper[4768]: E0223 18:47:33.471274 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:33 crc kubenswrapper[4768]: E0223 18:47:33.471636 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert 
podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:41.471616317 +0000 UTC m=+856.862102117 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:33 crc kubenswrapper[4768]: I0223 18:47:33.775974 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:33 crc kubenswrapper[4768]: E0223 18:47:33.776122 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:33 crc kubenswrapper[4768]: E0223 18:47:33.776172 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:41.77615596 +0000 UTC m=+857.166641760 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:34 crc kubenswrapper[4768]: I0223 18:47:34.182743 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:34 crc kubenswrapper[4768]: E0223 18:47:34.183018 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:34 crc kubenswrapper[4768]: I0223 18:47:34.183045 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:34 crc kubenswrapper[4768]: E0223 18:47:34.183243 4768 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 23 18:47:34 crc kubenswrapper[4768]: E0223 18:47:34.183281 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:42.183175939 +0000 UTC m=+857.573661779 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:34 crc kubenswrapper[4768]: E0223 18:47:34.183412 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:42.183383415 +0000 UTC m=+857.573869255 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "webhook-server-cert" not found Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.872139 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" event={"ID":"0522a131-cf71-4a3e-b60a-fa16371d47d8","Type":"ContainerStarted","Data":"feea7b2f26174a082399bad01b726491f0800f1a62c7f7773faacbc83b1d0011"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.873412 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.880154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" event={"ID":"0f6c6c75-0fda-41cc-b05f-cfc6e935f82b","Type":"ContainerStarted","Data":"387e2c631fa92cdb59bbb3b3d3fde10b119b4a186381ba832822d94c3bdf959b"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.880311 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.883709 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" event={"ID":"d52cf386-a646-44c0-8394-cdf497e52ebe","Type":"ContainerStarted","Data":"41b2f37b6f46d88357ed9e63e7e2e71b1d190715e09f35f78152c3abc28e2bf1"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.884525 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.889724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" event={"ID":"b16ba816-bafa-430e-b18a-5afa27bc0abb","Type":"ContainerStarted","Data":"49e1f353cf9d9f5006d0a049910b17bcfaef21a741c4fd3072c44fef4c639a4f"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.890072 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.894130 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" podStartSLOduration=3.088736659 podStartE2EDuration="14.894115099s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.830200819 +0000 UTC m=+842.220686619" lastFinishedPulling="2026-02-23 18:47:38.635579229 +0000 UTC m=+854.026065059" observedRunningTime="2026-02-23 18:47:39.890522221 +0000 UTC m=+855.281008021" watchObservedRunningTime="2026-02-23 18:47:39.894115099 +0000 UTC m=+855.284600899" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.894225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" event={"ID":"aba58523-2fad-45af-87ee-a347b586ad4b","Type":"ContainerStarted","Data":"457b79f00540f8a51f96e96af7b8ece104c729f1dae32a3f0e0f79b9600bc3b8"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.894497 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.904060 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" event={"ID":"c7086dd9-9e6f-4207-a037-99369dc6e980","Type":"ContainerStarted","Data":"df39b6326930f5fd525949e387881c71444d59d7798da703b06e349aae757e2e"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.904435 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.909400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" event={"ID":"60e38add-201e-4431-90df-d9c31ba57f39","Type":"ContainerStarted","Data":"fdd52323f57044cc581ac984d82f361822e9093f967b761c21a887f0c0746d60"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.909968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.911159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" event={"ID":"13137d15-ffaa-4127-9885-91e9a6fd6a65","Type":"ContainerStarted","Data":"5c2f24ee075922ef5d7b385f5da91c9a82399665c1cae112d220477a7e1dd105"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.911436 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.912928 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" event={"ID":"8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b","Type":"ContainerStarted","Data":"a52bc684c54b0ead0743a04d21f3cf29a9c945c73a2376b23a1eff4df2ef01f4"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.912979 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.916388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" event={"ID":"be4fc57a-a006-4068-be4b-5bdeb50f48b4","Type":"ContainerStarted","Data":"20008bdfb18c9b3b56a6b9ffdc498fcca210abdd19ecc81d4d0f9716a0d181bb"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.916422 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.918878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" event={"ID":"0f3afa5e-021e-4226-9734-38d4da145e0a","Type":"ContainerStarted","Data":"e98227b95e630b7b46947bf4247bb0b271df16c5c1d515808d6b5c5e24bac9fb"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.919053 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.919506 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" podStartSLOduration=3.058119768 podStartE2EDuration="14.919495323s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.686286989 +0000 UTC m=+842.076772789" lastFinishedPulling="2026-02-23 18:47:38.547662534 +0000 UTC m=+853.938148344" observedRunningTime="2026-02-23 18:47:39.916792919 +0000 UTC m=+855.307278719" watchObservedRunningTime="2026-02-23 18:47:39.919495323 +0000 UTC m=+855.309981123" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.927401 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" event={"ID":"9afd4512-6186-4cb8-a8ba-90628662efba","Type":"ContainerStarted","Data":"1e442f3083633e4b7bb2799a54d15b9bfd4e5aac79022ee73fdedecd3b762ab2"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.927871 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.929499 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" event={"ID":"97f25c43-f624-4320-b34b-789df5cab5f3","Type":"ContainerStarted","Data":"932fb39c30119fb739866e741b3b5117e951384ac0a591da7fdbdcc6401b6bfc"} Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.929904 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.939222 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" podStartSLOduration=3.426055176 podStartE2EDuration="14.939209923s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" 
firstStartedPulling="2026-02-23 18:47:27.065182815 +0000 UTC m=+842.455668615" lastFinishedPulling="2026-02-23 18:47:38.578337562 +0000 UTC m=+853.968823362" observedRunningTime="2026-02-23 18:47:39.93286461 +0000 UTC m=+855.323350410" watchObservedRunningTime="2026-02-23 18:47:39.939209923 +0000 UTC m=+855.329695723" Feb 23 18:47:39 crc kubenswrapper[4768]: I0223 18:47:39.955095 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" podStartSLOduration=3.29727604 podStartE2EDuration="14.955077657s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.88741285 +0000 UTC m=+842.277898650" lastFinishedPulling="2026-02-23 18:47:38.545214467 +0000 UTC m=+853.935700267" observedRunningTime="2026-02-23 18:47:39.949142835 +0000 UTC m=+855.339628635" watchObservedRunningTime="2026-02-23 18:47:39.955077657 +0000 UTC m=+855.345563447" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.013714 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" podStartSLOduration=3.299217897 podStartE2EDuration="15.013700891s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.828649165 +0000 UTC m=+842.219134965" lastFinishedPulling="2026-02-23 18:47:38.543132159 +0000 UTC m=+853.933617959" observedRunningTime="2026-02-23 18:47:40.009092695 +0000 UTC m=+855.399578495" watchObservedRunningTime="2026-02-23 18:47:40.013700891 +0000 UTC m=+855.404186691" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.027418 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" podStartSLOduration=3.585777727 podStartE2EDuration="15.027404917s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 
18:47:27.107469422 +0000 UTC m=+842.497955222" lastFinishedPulling="2026-02-23 18:47:38.549096612 +0000 UTC m=+853.939582412" observedRunningTime="2026-02-23 18:47:40.026396899 +0000 UTC m=+855.416882699" watchObservedRunningTime="2026-02-23 18:47:40.027404917 +0000 UTC m=+855.417890717" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.053284 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" podStartSLOduration=3.153802006 podStartE2EDuration="15.053268924s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.643695542 +0000 UTC m=+842.034181342" lastFinishedPulling="2026-02-23 18:47:38.54316246 +0000 UTC m=+853.933648260" observedRunningTime="2026-02-23 18:47:40.04909613 +0000 UTC m=+855.439581920" watchObservedRunningTime="2026-02-23 18:47:40.053268924 +0000 UTC m=+855.443754724" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.082865 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" podStartSLOduration=3.646970881 podStartE2EDuration="15.082852084s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.10739564 +0000 UTC m=+842.497881430" lastFinishedPulling="2026-02-23 18:47:38.543276813 +0000 UTC m=+853.933762633" observedRunningTime="2026-02-23 18:47:40.067413951 +0000 UTC m=+855.457899751" watchObservedRunningTime="2026-02-23 18:47:40.082852084 +0000 UTC m=+855.473337884" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.111393 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" podStartSLOduration=3.673011374 podStartE2EDuration="15.111379585s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.111650656 +0000 UTC 
m=+842.502136456" lastFinishedPulling="2026-02-23 18:47:38.550018867 +0000 UTC m=+853.940504667" observedRunningTime="2026-02-23 18:47:40.086533055 +0000 UTC m=+855.477018855" watchObservedRunningTime="2026-02-23 18:47:40.111379585 +0000 UTC m=+855.501865385" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.114152 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" podStartSLOduration=3.594884336 podStartE2EDuration="15.114146151s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.075356753 +0000 UTC m=+842.465842543" lastFinishedPulling="2026-02-23 18:47:38.594618528 +0000 UTC m=+853.985104358" observedRunningTime="2026-02-23 18:47:40.109875643 +0000 UTC m=+855.500361443" watchObservedRunningTime="2026-02-23 18:47:40.114146151 +0000 UTC m=+855.504631951" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.134103 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" podStartSLOduration=3.793663895 podStartE2EDuration="15.134081246s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.250056174 +0000 UTC m=+842.640541974" lastFinishedPulling="2026-02-23 18:47:38.590473495 +0000 UTC m=+853.980959325" observedRunningTime="2026-02-23 18:47:40.129694185 +0000 UTC m=+855.520179985" watchObservedRunningTime="2026-02-23 18:47:40.134081246 +0000 UTC m=+855.524567046" Feb 23 18:47:40 crc kubenswrapper[4768]: I0223 18:47:40.156162 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" podStartSLOduration=3.326384279 podStartE2EDuration="15.15614668s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.713276566 +0000 UTC m=+842.103762366" 
lastFinishedPulling="2026-02-23 18:47:38.543038967 +0000 UTC m=+853.933524767" observedRunningTime="2026-02-23 18:47:40.148887721 +0000 UTC m=+855.539373521" watchObservedRunningTime="2026-02-23 18:47:40.15614668 +0000 UTC m=+855.546632480" Feb 23 18:47:41 crc kubenswrapper[4768]: I0223 18:47:41.525718 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" Feb 23 18:47:41 crc kubenswrapper[4768]: E0223 18:47:41.525998 4768 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:41 crc kubenswrapper[4768]: E0223 18:47:41.526268 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert podName:02eb4c80-855b-4590-b09e-d6e6b7919f74 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:57.526229422 +0000 UTC m=+872.916715222 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert") pod "infra-operator-controller-manager-79d975b745-gn242" (UID: "02eb4c80-855b-4590-b09e-d6e6b7919f74") : secret "infra-operator-webhook-server-cert" not found Feb 23 18:47:41 crc kubenswrapper[4768]: I0223 18:47:41.829802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" Feb 23 18:47:41 crc kubenswrapper[4768]: E0223 18:47:41.830265 4768 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:41 crc kubenswrapper[4768]: E0223 18:47:41.830310 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert podName:fff6d2ff-130f-45ae-943a-28b8740298c2 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:57.830295933 +0000 UTC m=+873.220781733 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" (UID: "fff6d2ff-130f-45ae-943a-28b8740298c2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 23 18:47:42 crc kubenswrapper[4768]: I0223 18:47:42.236111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:42 crc kubenswrapper[4768]: I0223 18:47:42.236213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:42 crc kubenswrapper[4768]: E0223 18:47:42.236322 4768 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 23 18:47:42 crc kubenswrapper[4768]: E0223 18:47:42.236396 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs podName:92c4522a-291f-4c44-8e08-8e4002685f66 nodeName:}" failed. No retries permitted until 2026-02-23 18:47:58.236378175 +0000 UTC m=+873.626863985 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs") pod "openstack-operator-controller-manager-7dfcb74874-dxkzr" (UID: "92c4522a-291f-4c44-8e08-8e4002685f66") : secret "metrics-server-cert" not found Feb 23 18:47:42 crc kubenswrapper[4768]: I0223 18:47:42.243229 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-webhook-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" Feb 23 18:47:45 crc kubenswrapper[4768]: I0223 18:47:45.585558 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-mng89" Feb 23 18:47:45 crc kubenswrapper[4768]: I0223 18:47:45.605814 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" podStartSLOduration=8.770718114 podStartE2EDuration="20.60579418s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:26.713863292 +0000 UTC m=+842.104349092" lastFinishedPulling="2026-02-23 18:47:38.548939348 +0000 UTC m=+853.939425158" observedRunningTime="2026-02-23 18:47:40.166657297 +0000 UTC m=+855.557143097" watchObservedRunningTime="2026-02-23 18:47:45.60579418 +0000 UTC m=+860.996279980" Feb 23 18:47:45 crc kubenswrapper[4768]: I0223 18:47:45.657185 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-cj6bl" Feb 23 18:47:45 crc kubenswrapper[4768]: I0223 18:47:45.661219 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-cprlh" Feb 23 18:47:45 crc kubenswrapper[4768]: I0223 18:47:45.985184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-784b5bb6c5-chqsr" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.042143 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-qnwrc" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.081197 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-stm2m" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.130573 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-wwlql" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.231184 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-l5mqh" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.277390 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-xm2kv" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.310660 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-mzwrn" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.320226 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6bd4687957-w5x47" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.395979 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-659dc6bbfc-7vrp5" Feb 23 18:47:46 crc kubenswrapper[4768]: I0223 18:47:46.486240 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q66cg" Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.018738 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" event={"ID":"034d1fc6-6b51-4e9a-99f9-67038d4c9926","Type":"ContainerStarted","Data":"eb660c0174a93a9c20946fd50fb1412a3fe7c363b37c9eddb0bf731f62055b2d"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.019921 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.025970 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" event={"ID":"cbbc4a69-26c2-4d05-b369-aa142f5a04d2","Type":"ContainerStarted","Data":"ab95df238a53fb986053d52b30cbc2937cfc2b1c2b2998040407748e02f710c1"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.026522 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.028552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" event={"ID":"0b78a9a3-5a2b-435d-8e2f-661eddd91177","Type":"ContainerStarted","Data":"5729dfb79f3526aa35d6b4957d835538b1ad44088252aeed152570ec69105d75"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.028933 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" Feb 23 18:47:49 crc 
kubenswrapper[4768]: I0223 18:47:49.030463 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" event={"ID":"ea71893a-6b37-4cc9-b0f5-be711669e8d1","Type":"ContainerStarted","Data":"4609a56a8e1754ff49cc0b726ce34f0a089903be6542ae9e184342b962955919"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.030799 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.032035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" event={"ID":"d74d7097-0324-4bb7-83c6-fa8cea69c1b4","Type":"ContainerStarted","Data":"913bf486a08689043578cc6082581a4948545b048fef0940620c03f701aaa9af"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.034419 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" event={"ID":"86030533-da46-4579-a1ce-67f3d96c7a90","Type":"ContainerStarted","Data":"ff9901f40fcbe7937411b9310168548d3ecd29781e5b3ea0c0ac2514b9c99e87"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.034781 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.035804 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" event={"ID":"435b416a-a73b-420a-9f48-99be70b4e110","Type":"ContainerStarted","Data":"9a02cee1f76d98183ea3944659ddcc27b6c0c12d0c808a75d4222beeb8d3ecce"} Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.036150 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.047911 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk" podStartSLOduration=5.855091237 podStartE2EDuration="24.047895224s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.387335481 +0000 UTC m=+842.777821281" lastFinishedPulling="2026-02-23 18:47:45.580139438 +0000 UTC m=+860.970625268" observedRunningTime="2026-02-23 18:47:49.045502038 +0000 UTC m=+864.435987838" watchObservedRunningTime="2026-02-23 18:47:49.047895224 +0000 UTC m=+864.438381024"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.060213 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2" podStartSLOduration=2.738820219 podStartE2EDuration="24.06019764s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.26344943 +0000 UTC m=+842.653935230" lastFinishedPulling="2026-02-23 18:47:48.584826821 +0000 UTC m=+863.975312651" observedRunningTime="2026-02-23 18:47:49.058759261 +0000 UTC m=+864.449245061" watchObservedRunningTime="2026-02-23 18:47:49.06019764 +0000 UTC m=+864.450683440"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.090807 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k" podStartSLOduration=2.600962306 podStartE2EDuration="24.090791127s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.115982005 +0000 UTC m=+842.506467805" lastFinishedPulling="2026-02-23 18:47:48.605810836 +0000 UTC m=+863.996296626" observedRunningTime="2026-02-23 18:47:49.075133758 +0000 UTC m=+864.465619558" watchObservedRunningTime="2026-02-23 18:47:49.090791127 +0000 UTC m=+864.481276927"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.093094 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p" podStartSLOduration=2.966315033 podStartE2EDuration="24.09308805s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.392388208 +0000 UTC m=+842.782874008" lastFinishedPulling="2026-02-23 18:47:48.519161185 +0000 UTC m=+863.909647025" observedRunningTime="2026-02-23 18:47:49.089017518 +0000 UTC m=+864.479503318" watchObservedRunningTime="2026-02-23 18:47:49.09308805 +0000 UTC m=+864.483573850"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.105816 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dhmmp" podStartSLOduration=1.9085499129999999 podStartE2EDuration="23.105801928s" podCreationTimestamp="2026-02-23 18:47:26 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.373943974 +0000 UTC m=+842.764429774" lastFinishedPulling="2026-02-23 18:47:48.571195989 +0000 UTC m=+863.961681789" observedRunningTime="2026-02-23 18:47:49.104921074 +0000 UTC m=+864.495406874" watchObservedRunningTime="2026-02-23 18:47:49.105801928 +0000 UTC m=+864.496287728"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.125358 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw" podStartSLOduration=6.340857189 podStartE2EDuration="24.125342132s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.379746073 +0000 UTC m=+842.770231873" lastFinishedPulling="2026-02-23 18:47:45.164231016 +0000 UTC m=+860.554716816" observedRunningTime="2026-02-23 18:47:49.121239641 +0000 UTC m=+864.511725441" watchObservedRunningTime="2026-02-23 18:47:49.125342132 +0000 UTC m=+864.515827932"
Feb 23 18:47:49 crc kubenswrapper[4768]: I0223 18:47:49.154371 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t" podStartSLOduration=2.947467417 podStartE2EDuration="24.154355367s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:27.389441457 +0000 UTC m=+842.779927257" lastFinishedPulling="2026-02-23 18:47:48.596329407 +0000 UTC m=+863.986815207" observedRunningTime="2026-02-23 18:47:49.151064367 +0000 UTC m=+864.541550167" watchObservedRunningTime="2026-02-23 18:47:49.154355367 +0000 UTC m=+864.544841167"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.381236 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-pmp8k"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.442127 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-t5qm2"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.467351 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5955d8c787-g9dpw"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.555225 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-589c568786-6wfdk"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.590507 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5dc6794d5b-nc28p"
Feb 23 18:47:56 crc kubenswrapper[4768]: I0223 18:47:56.626970 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-gn98t"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.579897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.591679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/02eb4c80-855b-4590-b09e-d6e6b7919f74-cert\") pod \"infra-operator-controller-manager-79d975b745-gn242\" (UID: \"02eb4c80-855b-4590-b09e-d6e6b7919f74\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.780829 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.894641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.901173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fff6d2ff-130f-45ae-943a-28b8740298c2-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69\" (UID: \"fff6d2ff-130f-45ae-943a-28b8740298c2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"
Feb 23 18:47:57 crc kubenswrapper[4768]: I0223 18:47:57.911169 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.268946 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-gn242"]
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.305484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.315817 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/92c4522a-291f-4c44-8e08-8e4002685f66-metrics-certs\") pod \"openstack-operator-controller-manager-7dfcb74874-dxkzr\" (UID: \"92c4522a-291f-4c44-8e08-8e4002685f66\") " pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.359675 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"]
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.467578 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"
Feb 23 18:47:58 crc kubenswrapper[4768]: I0223 18:47:58.783783 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"]
Feb 23 18:47:58 crc kubenswrapper[4768]: W0223 18:47:58.787371 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92c4522a_291f_4c44_8e08_8e4002685f66.slice/crio-2671272cc35775cf472a9a582dc712f8f10b120fbb57789df26e6291a526101e WatchSource:0}: Error finding container 2671272cc35775cf472a9a582dc712f8f10b120fbb57789df26e6291a526101e: Status 404 returned error can't find the container with id 2671272cc35775cf472a9a582dc712f8f10b120fbb57789df26e6291a526101e
Feb 23 18:47:59 crc kubenswrapper[4768]: I0223 18:47:59.115851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" event={"ID":"92c4522a-291f-4c44-8e08-8e4002685f66","Type":"ContainerStarted","Data":"2671272cc35775cf472a9a582dc712f8f10b120fbb57789df26e6291a526101e"}
Feb 23 18:47:59 crc kubenswrapper[4768]: I0223 18:47:59.117315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" event={"ID":"02eb4c80-855b-4590-b09e-d6e6b7919f74","Type":"ContainerStarted","Data":"e070424a345a0189cd80fb9e92bb15d78e2d773a5248fd78364f61009fc8afd9"}
Feb 23 18:47:59 crc kubenswrapper[4768]: I0223 18:47:59.118995 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" event={"ID":"fff6d2ff-130f-45ae-943a-28b8740298c2","Type":"ContainerStarted","Data":"6eb399407680269046c01ea774e14e51b7b543b80af944ef0932de654568b8bb"}
Feb 23 18:48:00 crc kubenswrapper[4768]: I0223 18:48:00.128862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" event={"ID":"92c4522a-291f-4c44-8e08-8e4002685f66","Type":"ContainerStarted","Data":"c3d8285f97002ef3984deaf99d84d2beecd312f5209ef33a1952fb6abc9def9d"}
Feb 23 18:48:00 crc kubenswrapper[4768]: I0223 18:48:00.129591 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"
Feb 23 18:48:00 crc kubenswrapper[4768]: I0223 18:48:00.165553 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr" podStartSLOduration=35.165523828 podStartE2EDuration="35.165523828s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:48:00.154379753 +0000 UTC m=+875.544865553" watchObservedRunningTime="2026-02-23 18:48:00.165523828 +0000 UTC m=+875.556009658"
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.154303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" event={"ID":"fff6d2ff-130f-45ae-943a-28b8740298c2","Type":"ContainerStarted","Data":"3a3fd18001fbf1dc0b4ab2de4f6e8c40b859b4170c2a5eb2d30e8c7fb37b78e1"}
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.154882 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.156750 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" event={"ID":"02eb4c80-855b-4590-b09e-d6e6b7919f74","Type":"ContainerStarted","Data":"b0fdc5d0e083f70c38d58ea78a2dbaf0fb5f6125b9ef5a23ae31a5e6bfe07e82"}
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.157087 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242"
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.195959 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69" podStartSLOduration=34.497717524 podStartE2EDuration="38.195934346s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:58.364305278 +0000 UTC m=+873.754791078" lastFinishedPulling="2026-02-23 18:48:02.06252209 +0000 UTC m=+877.453007900" observedRunningTime="2026-02-23 18:48:03.17890187 +0000 UTC m=+878.569387710" watchObservedRunningTime="2026-02-23 18:48:03.195934346 +0000 UTC m=+878.586420176"
Feb 23 18:48:03 crc kubenswrapper[4768]: I0223 18:48:03.203348 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242" podStartSLOduration=34.428630604 podStartE2EDuration="38.203329308s" podCreationTimestamp="2026-02-23 18:47:25 +0000 UTC" firstStartedPulling="2026-02-23 18:47:58.275052596 +0000 UTC m=+873.665538406" lastFinishedPulling="2026-02-23 18:48:02.04975131 +0000 UTC m=+877.440237110" observedRunningTime="2026-02-23 18:48:03.199054941 +0000 UTC m=+878.589540751" watchObservedRunningTime="2026-02-23 18:48:03.203329308 +0000 UTC m=+878.593815118"
Feb 23 18:48:07 crc kubenswrapper[4768]: I0223 18:48:07.790920 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-gn242"
Feb 23 18:48:07 crc kubenswrapper[4768]: I0223 18:48:07.920373 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69"
Feb 23 18:48:08 crc kubenswrapper[4768]: I0223 18:48:08.478119 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7dfcb74874-dxkzr"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.563512 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"]
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.565517 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.568805 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.569025 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.572479 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-cqvzj"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.572560 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.573804 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"]
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.588953 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.589031 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq2hc\" (UniqueName: \"kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.622136 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"]
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.623537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.625531 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.646630 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"]
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.690437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq2hc\" (UniqueName: \"kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.690742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.691678 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.709157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq2hc\" (UniqueName: \"kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc\") pod \"dnsmasq-dns-675f4bcbfc-58kds\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.792043 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.792146 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vh5q\" (UniqueName: \"kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.792285 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.883222 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.893494 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.893592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vh5q\" (UniqueName: \"kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.893693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.894499 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.895083 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.915871 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vh5q\" (UniqueName: \"kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q\") pod \"dnsmasq-dns-78dd6ddcc-fmc4w\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:28 crc kubenswrapper[4768]: I0223 18:48:28.939829 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w"
Feb 23 18:48:29 crc kubenswrapper[4768]: I0223 18:48:29.384638 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"]
Feb 23 18:48:29 crc kubenswrapper[4768]: W0223 18:48:29.387238 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ab73f56_492c_4ef0_bcf5_467174d29131.slice/crio-a2d27f4417bc126ed439790af95894d7fe591afbd7635b0204b478271157f357 WatchSource:0}: Error finding container a2d27f4417bc126ed439790af95894d7fe591afbd7635b0204b478271157f357: Status 404 returned error can't find the container with id a2d27f4417bc126ed439790af95894d7fe591afbd7635b0204b478271157f357
Feb 23 18:48:29 crc kubenswrapper[4768]: I0223 18:48:29.410083 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"]
Feb 23 18:48:29 crc kubenswrapper[4768]: I0223 18:48:29.418291 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds" event={"ID":"4ab73f56-492c-4ef0-bcf5-467174d29131","Type":"ContainerStarted","Data":"a2d27f4417bc126ed439790af95894d7fe591afbd7635b0204b478271157f357"}
Feb 23 18:48:30 crc kubenswrapper[4768]: I0223 18:48:30.425402 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w" event={"ID":"6722ff9e-e2d5-4802-ad16-59d71dcc1544","Type":"ContainerStarted","Data":"512fc32f4ba4bb70adf8dc4e654b8783ea9d5b740dc09ceaaa3cb47e9daa0c88"}
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.378717 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.400718 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.402734 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.433620 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.540771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.540863 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8lmb\" (UniqueName: \"kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.540909 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.642664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8lmb\" (UniqueName: \"kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.642721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.642786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.643740 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.643756 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.655271 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.665770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8lmb\" (UniqueName: \"kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb\") pod \"dnsmasq-dns-666b6646f7-ztvvk\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.685439 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.690361 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.698935 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"]
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.732916 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.846223 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.846324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.846381 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqch\" (UniqueName: \"kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.947749 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.947822 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqch\" (UniqueName: \"kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.947886 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.949206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.949212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:31 crc kubenswrapper[4768]: I0223 18:48:31.974235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqch\" (UniqueName: \"kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch\") pod \"dnsmasq-dns-57d769cc4f-88vfh\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.023435 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.081723 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"]
Feb 23 18:48:32 crc kubenswrapper[4768]: W0223 18:48:32.093711 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0525cc76_435b_409d_ad19_cc48c44f2cbf.slice/crio-7591b13ebc7d7fdeb5df728651a7a73b106dff38b385d066dcdc58363277165c WatchSource:0}: Error finding container 7591b13ebc7d7fdeb5df728651a7a73b106dff38b385d066dcdc58363277165c: Status 404 returned error can't find the container with id 7591b13ebc7d7fdeb5df728651a7a73b106dff38b385d066dcdc58363277165c
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.105644 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.288167 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"]
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.446114 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh" event={"ID":"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f","Type":"ContainerStarted","Data":"daba75694e24073b59f547bca3a5909030ef87ac96a919acc3d2610362f9233f"}
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.447883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" event={"ID":"0525cc76-435b-409d-ad19-cc48c44f2cbf","Type":"ContainerStarted","Data":"7591b13ebc7d7fdeb5df728651a7a73b106dff38b385d066dcdc58363277165c"}
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.517445 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.521102 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.532320 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533355 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533379 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533437 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533580 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bglfm"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533354 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.533900 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.534367 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.659917 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660012 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660078 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660127 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660286 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660419 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphq4\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660519 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660660 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.660728 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0"
Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf\") pod \"rabbitmq-server-0\" (UID:
\"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761876 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 
18:48:32.761903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphq4\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761926 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761970 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.761990 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.762772 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.763593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.764594 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.765108 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.765809 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.766805 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " 
pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.769122 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.769269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.770565 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.775193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.783001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphq4\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.793348 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") " pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.832556 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.835668 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.835781 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843285 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843523 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dfxbv" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843628 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843643 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843669 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843740 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.843791 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.856030 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967397 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967461 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967589 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.967877 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.968032 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.968364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.968395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9cp\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.968532 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:32 crc kubenswrapper[4768]: I0223 18:48:32.968580 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069713 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9cp\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069765 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 
18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069840 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069857 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069882 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069899 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.069927 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.074200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.074645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.075041 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.075062 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.075768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.077264 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.078234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.078986 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.081825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.081930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.102743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9cp\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.120262 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.200883 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.638573 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:48:33 crc kubenswrapper[4768]: W0223 18:48:33.665344 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda52ce7bc_e9a8_474d_87de_598d337bc360.slice/crio-ef933112982f1aebaf01c0fcf8723602b73a21e08ecbb42b5c90270093b5f808 WatchSource:0}: Error finding container ef933112982f1aebaf01c0fcf8723602b73a21e08ecbb42b5c90270093b5f808: Status 404 returned error can't find the container with id ef933112982f1aebaf01c0fcf8723602b73a21e08ecbb42b5c90270093b5f808 Feb 23 18:48:33 crc kubenswrapper[4768]: I0223 18:48:33.986317 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.021188 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.025242 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.034979 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.035224 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.036977 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.040373 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.043239 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-prz7v" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.050835 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213227 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213353 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213407 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213441 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213463 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213512 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.213528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glkg8\" (UniqueName: 
\"kubernetes.io/projected/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kube-api-access-glkg8\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315238 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315311 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315345 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315370 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glkg8\" (UniqueName: \"kubernetes.io/projected/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kube-api-access-glkg8\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315434 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315464 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.315513 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.316149 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.317056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: 
I0223 18:48:34.320192 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-config-data-default\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.328715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.332365 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kolla-config\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.332466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glkg8\" (UniqueName: \"kubernetes.io/projected/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-kube-api-access-glkg8\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.334914 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.338528 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2d53e56-3a7e-48fa-b0ea-59b932d3b25a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.348878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a\") " pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.376685 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.476439 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerStarted","Data":"ef933112982f1aebaf01c0fcf8723602b73a21e08ecbb42b5c90270093b5f808"} Feb 23 18:48:34 crc kubenswrapper[4768]: I0223 18:48:34.477714 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerStarted","Data":"26b9757c2a1c332ff3224f10ffc00eb3153a22859a91180c0077bb7c607fa3ab"} Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.296521 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.297726 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.303086 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.303207 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-wcmmq" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.303327 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.303529 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.347637 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443400 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443469 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443497 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286zg\" (UniqueName: 
\"kubernetes.io/projected/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kube-api-access-286zg\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443548 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443620 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443646 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.443829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545601 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545653 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545673 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286zg\" (UniqueName: \"kubernetes.io/projected/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kube-api-access-286zg\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545747 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545765 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.545812 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.547721 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.547838 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.548937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.549226 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.550238 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.560842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.561622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " 
pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.569621 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286zg\" (UniqueName: \"kubernetes.io/projected/e2b0c66e-d534-4e7d-91dc-f05f5f857a43-kube-api-access-286zg\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.574121 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2b0c66e-d534-4e7d-91dc-f05f5f857a43\") " pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.645163 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.729144 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.730102 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.732491 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.735882 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fc5w7" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.736051 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.759599 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.853967 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-config-data\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.854020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.854052 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftqj\" (UniqueName: \"kubernetes.io/projected/065294f2-15e0-4aeb-9002-9602051bf4ff-kube-api-access-6ftqj\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.854083 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-kolla-config\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.854117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.955567 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-config-data\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.955610 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.955640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftqj\" (UniqueName: \"kubernetes.io/projected/065294f2-15e0-4aeb-9002-9602051bf4ff-kube-api-access-6ftqj\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.955671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-kolla-config\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " 
pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.955705 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.957648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-config-data\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.957728 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065294f2-15e0-4aeb-9002-9602051bf4ff-kolla-config\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.967746 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.974408 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065294f2-15e0-4aeb-9002-9602051bf4ff-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:35 crc kubenswrapper[4768]: I0223 18:48:35.977078 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftqj\" (UniqueName: 
\"kubernetes.io/projected/065294f2-15e0-4aeb-9002-9602051bf4ff-kube-api-access-6ftqj\") pod \"memcached-0\" (UID: \"065294f2-15e0-4aeb-9002-9602051bf4ff\") " pod="openstack/memcached-0" Feb 23 18:48:36 crc kubenswrapper[4768]: I0223 18:48:36.051422 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.162360 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.165690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.170361 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h6s24" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.171677 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.295533 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw8n8\" (UniqueName: \"kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8\") pod \"kube-state-metrics-0\" (UID: \"1db6c967-62ff-4db3-b37d-152bdb673d74\") " pod="openstack/kube-state-metrics-0" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.397389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw8n8\" (UniqueName: \"kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8\") pod \"kube-state-metrics-0\" (UID: \"1db6c967-62ff-4db3-b37d-152bdb673d74\") " pod="openstack/kube-state-metrics-0" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.416976 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw8n8\" (UniqueName: 
\"kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8\") pod \"kube-state-metrics-0\" (UID: \"1db6c967-62ff-4db3-b37d-152bdb673d74\") " pod="openstack/kube-state-metrics-0" Feb 23 18:48:38 crc kubenswrapper[4768]: I0223 18:48:38.484959 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.023046 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.033054 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.037070 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.037487 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.041764 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.044077 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.045555 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.045582 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t6nkc" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164559 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-config\") pod \"ovsdbserver-nb-0\" 
(UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164607 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164637 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164654 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164671 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164707 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 
crc kubenswrapper[4768]: I0223 18:48:41.164768 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.164800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh8q9\" (UniqueName: \"kubernetes.io/projected/3b458d35-3ae1-4a39-b1e5-dcfef430f299-kube-api-access-sh8q9\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266576 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh8q9\" (UniqueName: \"kubernetes.io/projected/3b458d35-3ae1-4a39-b1e5-dcfef430f299-kube-api-access-sh8q9\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266629 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266649 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266681 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266695 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266765 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.266824 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.269729 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: 
\"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.271817 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.272811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.273273 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b458d35-3ae1-4a39-b1e5-dcfef430f299-config\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.274008 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.278023 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.287009 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b458d35-3ae1-4a39-b1e5-dcfef430f299-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.298318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh8q9\" (UniqueName: \"kubernetes.io/projected/3b458d35-3ae1-4a39-b1e5-dcfef430f299-kube-api-access-sh8q9\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.315388 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"3b458d35-3ae1-4a39-b1e5-dcfef430f299\") " pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:41 crc kubenswrapper[4768]: I0223 18:48:41.354725 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.031726 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7xj45"] Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.032832 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.034961 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.035388 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7g4v8" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.048256 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.048791 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xj45"] Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.087679 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9r6tg"] Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.093635 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.100856 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9r6tg"] Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183619 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-lib\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183678 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-run\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183699 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj8ws\" (UniqueName: \"kubernetes.io/projected/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-kube-api-access-fj8ws\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-log-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-combined-ca-bundle\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183790 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183807 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c33d166-1e3e-46c5-a725-472499a5efab-scripts\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183908 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-scripts\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-log\") pod 
\"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183964 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-ovn-controller-tls-certs\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.183986 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-etc-ovs\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.184007 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7djk\" (UniqueName: \"kubernetes.io/projected/6c33d166-1e3e-46c5-a725-472499a5efab-kube-api-access-q7djk\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286413 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-etc-ovs\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7djk\" (UniqueName: \"kubernetes.io/projected/6c33d166-1e3e-46c5-a725-472499a5efab-kube-api-access-q7djk\") pod \"ovn-controller-7xj45\" (UID: 
\"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286544 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-lib\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286597 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-run\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj8ws\" (UniqueName: \"kubernetes.io/projected/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-kube-api-access-fj8ws\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286665 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-log-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286701 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-combined-ca-bundle\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: 
I0223 18:48:42.286745 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c33d166-1e3e-46c5-a725-472499a5efab-scripts\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286832 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286866 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-scripts\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286896 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-log\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.286920 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-ovn-controller-tls-certs\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.287719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-log-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.287897 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-etc-ovs\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.288292 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-lib\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.288414 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-run\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.288983 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 
18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.291029 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6c33d166-1e3e-46c5-a725-472499a5efab-scripts\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.291360 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-var-log\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.291530 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6c33d166-1e3e-46c5-a725-472499a5efab-var-run-ovn\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.291979 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-scripts\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.295623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-ovn-controller-tls-certs\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.295645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c33d166-1e3e-46c5-a725-472499a5efab-combined-ca-bundle\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.303313 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj8ws\" (UniqueName: \"kubernetes.io/projected/3c7bf964-ae59-40e5-9a0c-8fd8068b6695-kube-api-access-fj8ws\") pod \"ovn-controller-ovs-9r6tg\" (UID: \"3c7bf964-ae59-40e5-9a0c-8fd8068b6695\") " pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.306026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7djk\" (UniqueName: \"kubernetes.io/projected/6c33d166-1e3e-46c5-a725-472499a5efab-kube-api-access-q7djk\") pod \"ovn-controller-7xj45\" (UID: \"6c33d166-1e3e-46c5-a725-472499a5efab\") " pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.355439 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xj45" Feb 23 18:48:42 crc kubenswrapper[4768]: I0223 18:48:42.424264 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.089785 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.091435 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.093494 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.093723 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-ddh72" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.094080 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.095448 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.112811 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239057 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239288 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlpn\" (UniqueName: \"kubernetes.io/projected/7a43d0d6-32a5-4617-8613-e7fb22a39303-kube-api-access-pxlpn\") pod 
\"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239360 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.239672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.240370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.240404 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 
crc kubenswrapper[4768]: I0223 18:48:45.342721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.342804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.342889 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.344791 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.344860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxlpn\" (UniqueName: \"kubernetes.io/projected/7a43d0d6-32a5-4617-8613-e7fb22a39303-kube-api-access-pxlpn\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.344895 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.344940 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.344994 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.345070 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.345779 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.346315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a43d0d6-32a5-4617-8613-e7fb22a39303-config\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc 
kubenswrapper[4768]: I0223 18:48:45.350941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.352152 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.359001 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.359632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a43d0d6-32a5-4617-8613-e7fb22a39303-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.363525 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxlpn\" (UniqueName: \"kubernetes.io/projected/7a43d0d6-32a5-4617-8613-e7fb22a39303-kube-api-access-pxlpn\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.373331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7a43d0d6-32a5-4617-8613-e7fb22a39303\") " pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:45 crc kubenswrapper[4768]: I0223 18:48:45.426103 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 23 18:48:48 crc kubenswrapper[4768]: I0223 18:48:48.981294 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.756670 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.757629 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8lmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ztvvk_openstack(0525cc76-435b-409d-ad19-cc48c44f2cbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.759441 4768 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.803783 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.803927 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kq2hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-58kds_openstack(4ab73f56-492c-4ef0-bcf5-467174d29131): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.805136 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-58kds" podUID="4ab73f56-492c-4ef0-bcf5-467174d29131" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.807319 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.807543 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jxqch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-88vfh_openstack(2d4b920a-fcdc-4f02-96a2-6ee2dd23601f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.808600 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh" podUID="2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.862376 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.862876 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vh5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-fmc4w_openstack(6722ff9e-e2d5-4802-ad16-59d71dcc1544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:48:49 crc kubenswrapper[4768]: E0223 18:48:49.864048 4768 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w" podUID="6722ff9e-e2d5-4802-ad16-59d71dcc1544" Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.306187 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 23 18:48:50 crc kubenswrapper[4768]: W0223 18:48:50.336297 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2d53e56_3a7e_48fa_b0ea_59b932d3b25a.slice/crio-90997fbb82f916d9bfd1be54f80fbe04bd0ca0ad44891ee7b361037770a6aa51 WatchSource:0}: Error finding container 90997fbb82f916d9bfd1be54f80fbe04bd0ca0ad44891ee7b361037770a6aa51: Status 404 returned error can't find the container with id 90997fbb82f916d9bfd1be54f80fbe04bd0ca0ad44891ee7b361037770a6aa51 Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.420022 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.428122 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xj45"] Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.546358 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 23 18:48:50 crc kubenswrapper[4768]: W0223 18:48:50.551219 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b458d35_3ae1_4a39_b1e5_dcfef430f299.slice/crio-498335458d8c442494b4e5d3371aa62022fb32b878459bf3c82339604bf4b5b1 WatchSource:0}: Error finding container 498335458d8c442494b4e5d3371aa62022fb32b878459bf3c82339604bf4b5b1: Status 404 returned error can't find the container with id 498335458d8c442494b4e5d3371aa62022fb32b878459bf3c82339604bf4b5b1 Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 
18:48:50.640880 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.649656 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"065294f2-15e0-4aeb-9002-9602051bf4ff","Type":"ContainerStarted","Data":"a9c1943c50b3cc0c640a73f55ca56430416852a09374a7e60443cf02a7230fb3"} Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.653148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b458d35-3ae1-4a39-b1e5-dcfef430f299","Type":"ContainerStarted","Data":"498335458d8c442494b4e5d3371aa62022fb32b878459bf3c82339604bf4b5b1"} Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.656346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a","Type":"ContainerStarted","Data":"90997fbb82f916d9bfd1be54f80fbe04bd0ca0ad44891ee7b361037770a6aa51"} Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.658322 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45" event={"ID":"6c33d166-1e3e-46c5-a725-472499a5efab","Type":"ContainerStarted","Data":"34c026985ebde353dfcd28d4124e6a84d851b6ea7c420caa43253827c3de3501"} Feb 23 18:48:50 crc kubenswrapper[4768]: I0223 18:48:50.659944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2b0c66e-d534-4e7d-91dc-f05f5f857a43","Type":"ContainerStarted","Data":"8f7a2ef7c08ee1b1a0c67886bbb9cb730a6c073542e97bcb7975f5d099ab268a"} Feb 23 18:48:50 crc kubenswrapper[4768]: E0223 18:48:50.663504 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" 
podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" Feb 23 18:48:50 crc kubenswrapper[4768]: E0223 18:48:50.663820 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh" podUID="2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.149201 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.154347 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config\") pod \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271389 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc\") pod \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vh5q\" (UniqueName: \"kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q\") pod \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\" (UID: \"6722ff9e-e2d5-4802-ad16-59d71dcc1544\") " Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271572 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-kq2hc\" (UniqueName: \"kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc\") pod \"4ab73f56-492c-4ef0-bcf5-467174d29131\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config\") pod \"4ab73f56-492c-4ef0-bcf5-467174d29131\" (UID: \"4ab73f56-492c-4ef0-bcf5-467174d29131\") " Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.271857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config" (OuterVolumeSpecName: "config") pod "6722ff9e-e2d5-4802-ad16-59d71dcc1544" (UID: "6722ff9e-e2d5-4802-ad16-59d71dcc1544"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.272461 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.272881 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config" (OuterVolumeSpecName: "config") pod "4ab73f56-492c-4ef0-bcf5-467174d29131" (UID: "4ab73f56-492c-4ef0-bcf5-467174d29131"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.273769 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6722ff9e-e2d5-4802-ad16-59d71dcc1544" (UID: "6722ff9e-e2d5-4802-ad16-59d71dcc1544"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.302560 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q" (OuterVolumeSpecName: "kube-api-access-6vh5q") pod "6722ff9e-e2d5-4802-ad16-59d71dcc1544" (UID: "6722ff9e-e2d5-4802-ad16-59d71dcc1544"). InnerVolumeSpecName "kube-api-access-6vh5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.302647 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc" (OuterVolumeSpecName: "kube-api-access-kq2hc") pod "4ab73f56-492c-4ef0-bcf5-467174d29131" (UID: "4ab73f56-492c-4ef0-bcf5-467174d29131"). InnerVolumeSpecName "kube-api-access-kq2hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.374420 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vh5q\" (UniqueName: \"kubernetes.io/projected/6722ff9e-e2d5-4802-ad16-59d71dcc1544-kube-api-access-6vh5q\") on node \"crc\" DevicePath \"\"" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.374459 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq2hc\" (UniqueName: \"kubernetes.io/projected/4ab73f56-492c-4ef0-bcf5-467174d29131-kube-api-access-kq2hc\") on node \"crc\" DevicePath \"\"" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.374469 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab73f56-492c-4ef0-bcf5-467174d29131-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.374499 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6722ff9e-e2d5-4802-ad16-59d71dcc1544-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.416969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.525586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9r6tg"] Feb 23 18:48:51 crc kubenswrapper[4768]: W0223 18:48:51.542232 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c7bf964_ae59_40e5_9a0c_8fd8068b6695.slice/crio-db06e4787b276973ab616107ce2c961fcc57b8035b6490f420d9ef7225dce97f WatchSource:0}: Error finding container db06e4787b276973ab616107ce2c961fcc57b8035b6490f420d9ef7225dce97f: Status 404 returned error can't find the container with id db06e4787b276973ab616107ce2c961fcc57b8035b6490f420d9ef7225dce97f Feb 23 18:48:51 crc kubenswrapper[4768]: W0223 18:48:51.560812 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a43d0d6_32a5_4617_8613_e7fb22a39303.slice/crio-2d6a56253b18ea8f82e91778f99cbb14359e130a13f8c6e63925acc03a54439a WatchSource:0}: Error finding container 2d6a56253b18ea8f82e91778f99cbb14359e130a13f8c6e63925acc03a54439a: Status 404 returned error can't find the container with id 2d6a56253b18ea8f82e91778f99cbb14359e130a13f8c6e63925acc03a54439a Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.671368 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds" event={"ID":"4ab73f56-492c-4ef0-bcf5-467174d29131","Type":"ContainerDied","Data":"a2d27f4417bc126ed439790af95894d7fe591afbd7635b0204b478271157f357"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.671675 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58kds" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.675986 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w" event={"ID":"6722ff9e-e2d5-4802-ad16-59d71dcc1544","Type":"ContainerDied","Data":"512fc32f4ba4bb70adf8dc4e654b8783ea9d5b740dc09ceaaa3cb47e9daa0c88"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.676162 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fmc4w" Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.677989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1db6c967-62ff-4db3-b37d-152bdb673d74","Type":"ContainerStarted","Data":"13369a0723a26c42c35b191b9d19b4aa4034e2efc0c33987534c274de5bb86d5"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.680633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerStarted","Data":"8950944ed237e1903bb4e956e9e9496fa8c259943744c2c4afe591a90782d9cf"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.681633 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9r6tg" event={"ID":"3c7bf964-ae59-40e5-9a0c-8fd8068b6695","Type":"ContainerStarted","Data":"db06e4787b276973ab616107ce2c961fcc57b8035b6490f420d9ef7225dce97f"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.682987 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a43d0d6-32a5-4617-8613-e7fb22a39303","Type":"ContainerStarted","Data":"2d6a56253b18ea8f82e91778f99cbb14359e130a13f8c6e63925acc03a54439a"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.690223 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerStarted","Data":"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"} Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.710036 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"] Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.718121 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58kds"] Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.779704 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"] Feb 23 18:48:51 crc kubenswrapper[4768]: I0223 18:48:51.790560 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fmc4w"] Feb 23 18:48:53 crc kubenswrapper[4768]: I0223 18:48:53.318772 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab73f56-492c-4ef0-bcf5-467174d29131" path="/var/lib/kubelet/pods/4ab73f56-492c-4ef0-bcf5-467174d29131/volumes" Feb 23 18:48:53 crc kubenswrapper[4768]: I0223 18:48:53.319419 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6722ff9e-e2d5-4802-ad16-59d71dcc1544" path="/var/lib/kubelet/pods/6722ff9e-e2d5-4802-ad16-59d71dcc1544/volumes" Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.766505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b458d35-3ae1-4a39-b1e5-dcfef430f299","Type":"ContainerStarted","Data":"fe89aad8c9f3676651b6af1550ff7ace342ce4bb3a8732d83b8498dd2c4ace4b"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.771020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a","Type":"ContainerStarted","Data":"fcdca0f273aa0d35569a8a67a8307db1185cafaf1af419f9a3a658a1e03d984f"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.775075 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45" event={"ID":"6c33d166-1e3e-46c5-a725-472499a5efab","Type":"ContainerStarted","Data":"078a6baa58c2f439468b9e6c69866dfaeadcee8b593b58162a24693c36a812e3"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.775692 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-7xj45" Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.779836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2b0c66e-d534-4e7d-91dc-f05f5f857a43","Type":"ContainerStarted","Data":"b5e3c32420cd48bb6be7ec161fbdb0df1f1f9d7b194c943fc9e1e28af450693f"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.782081 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1db6c967-62ff-4db3-b37d-152bdb673d74","Type":"ContainerStarted","Data":"7171b990d311e9b89933d6c7670eacb3894fbdba5a14d235da85fc02ebbacbb0"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.782171 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.785458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"065294f2-15e0-4aeb-9002-9602051bf4ff","Type":"ContainerStarted","Data":"56b142fd4b187b628bb770cc53a08c891a757792a62505d04cf47e2b7bde4450"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.786089 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.787885 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a43d0d6-32a5-4617-8613-e7fb22a39303","Type":"ContainerStarted","Data":"c81daffc214f7b58e851c86c95fb908b9d62a981408d11ca35e34ba8d4f4b50e"} Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 
18:48:59.793755 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9r6tg" event={"ID":"3c7bf964-ae59-40e5-9a0c-8fd8068b6695","Type":"ContainerStarted","Data":"dfbe2c888b6b8cf3517dcb9bc563b8a5c6fb627f0fa07ad2ce34ad423a986b22"}
Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.844990 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7xj45" podStartSLOduration=9.856387464 podStartE2EDuration="17.844960385s" podCreationTimestamp="2026-02-23 18:48:42 +0000 UTC" firstStartedPulling="2026-02-23 18:48:50.48678337 +0000 UTC m=+925.877269170" lastFinishedPulling="2026-02-23 18:48:58.475356281 +0000 UTC m=+933.865842091" observedRunningTime="2026-02-23 18:48:59.829224393 +0000 UTC m=+935.219710213" watchObservedRunningTime="2026-02-23 18:48:59.844960385 +0000 UTC m=+935.235446195"
Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.879993 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.115816175 podStartE2EDuration="24.879971514s" podCreationTimestamp="2026-02-23 18:48:35 +0000 UTC" firstStartedPulling="2026-02-23 18:48:50.432483642 +0000 UTC m=+925.822969442" lastFinishedPulling="2026-02-23 18:48:58.196638981 +0000 UTC m=+933.587124781" observedRunningTime="2026-02-23 18:48:59.869994731 +0000 UTC m=+935.260480561" watchObservedRunningTime="2026-02-23 18:48:59.879971514 +0000 UTC m=+935.270457304"
Feb 23 18:48:59 crc kubenswrapper[4768]: I0223 18:48:59.918288 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.535479496 podStartE2EDuration="21.918266234s" podCreationTimestamp="2026-02-23 18:48:38 +0000 UTC" firstStartedPulling="2026-02-23 18:48:50.643697271 +0000 UTC m=+926.034183071" lastFinishedPulling="2026-02-23 18:48:59.026484019 +0000 UTC m=+934.416969809" observedRunningTime="2026-02-23 18:48:59.917163524 +0000 UTC m=+935.307649324" watchObservedRunningTime="2026-02-23 18:48:59.918266234 +0000 UTC m=+935.308752034"
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.818790 4768 generic.go:334] "Generic (PLEG): container finished" podID="3c7bf964-ae59-40e5-9a0c-8fd8068b6695" containerID="dfbe2c888b6b8cf3517dcb9bc563b8a5c6fb627f0fa07ad2ce34ad423a986b22" exitCode=0
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.818876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9r6tg" event={"ID":"3c7bf964-ae59-40e5-9a0c-8fd8068b6695","Type":"ContainerDied","Data":"dfbe2c888b6b8cf3517dcb9bc563b8a5c6fb627f0fa07ad2ce34ad423a986b22"}
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.821447 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9r6tg"
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.821489 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9r6tg" event={"ID":"3c7bf964-ae59-40e5-9a0c-8fd8068b6695","Type":"ContainerStarted","Data":"f54d074475b8b1b03f537abc3ee215d11812adae59202b8f326b9dd2c18c4bf4"}
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.821509 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9r6tg" event={"ID":"3c7bf964-ae59-40e5-9a0c-8fd8068b6695","Type":"ContainerStarted","Data":"16d10927364239369b720266e42e7654b8e2dc592c42be3dbc6fbbd5a1d66c0f"}
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.821523 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9r6tg"
Feb 23 18:49:00 crc kubenswrapper[4768]: I0223 18:49:00.842787 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9r6tg" podStartSLOduration=12.008130576 podStartE2EDuration="18.842760246s" podCreationTimestamp="2026-02-23 18:48:42 +0000 UTC" firstStartedPulling="2026-02-23 18:48:51.54582369 +0000 UTC m=+926.936309490" lastFinishedPulling="2026-02-23 18:48:58.38045334 +0000 UTC m=+933.770939160" observedRunningTime="2026-02-23 18:49:00.839695202 +0000 UTC m=+936.230181002" watchObservedRunningTime="2026-02-23 18:49:00.842760246 +0000 UTC m=+936.233246046"
Feb 23 18:49:01 crc kubenswrapper[4768]: I0223 18:49:01.838830 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7a43d0d6-32a5-4617-8613-e7fb22a39303","Type":"ContainerStarted","Data":"81b9e70e9e8762ba718ca33312e1f3c4d3712083a1e329ffdf4faf0306fae813"}
Feb 23 18:49:01 crc kubenswrapper[4768]: I0223 18:49:01.844367 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"3b458d35-3ae1-4a39-b1e5-dcfef430f299","Type":"ContainerStarted","Data":"c7566829c88cbfc9871f9e60f40ac26f09095699de28a35ff5f0db7838a9ff78"}
Feb 23 18:49:01 crc kubenswrapper[4768]: I0223 18:49:01.865473 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.477022966 podStartE2EDuration="17.86544764s" podCreationTimestamp="2026-02-23 18:48:44 +0000 UTC" firstStartedPulling="2026-02-23 18:48:51.573397346 +0000 UTC m=+926.963883146" lastFinishedPulling="2026-02-23 18:49:00.96182202 +0000 UTC m=+936.352307820" observedRunningTime="2026-02-23 18:49:01.863102905 +0000 UTC m=+937.253588745" watchObservedRunningTime="2026-02-23 18:49:01.86544764 +0000 UTC m=+937.255933450"
Feb 23 18:49:01 crc kubenswrapper[4768]: I0223 18:49:01.886379 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.487465871 podStartE2EDuration="22.886362464s" podCreationTimestamp="2026-02-23 18:48:39 +0000 UTC" firstStartedPulling="2026-02-23 18:48:50.555114213 +0000 UTC m=+925.945600013" lastFinishedPulling="2026-02-23 18:49:00.954010806 +0000 UTC m=+936.344496606" observedRunningTime="2026-02-23 18:49:01.881837709 +0000 UTC m=+937.272323519" watchObservedRunningTime="2026-02-23 18:49:01.886362464 +0000 UTC m=+937.276848274"
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.356417 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.407339 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.863175 4768 generic.go:334] "Generic (PLEG): container finished" podID="f2d53e56-3a7e-48fa-b0ea-59b932d3b25a" containerID="fcdca0f273aa0d35569a8a67a8307db1185cafaf1af419f9a3a658a1e03d984f" exitCode=0
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.863336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a","Type":"ContainerDied","Data":"fcdca0f273aa0d35569a8a67a8307db1185cafaf1af419f9a3a658a1e03d984f"}
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.867594 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2b0c66e-d534-4e7d-91dc-f05f5f857a43" containerID="b5e3c32420cd48bb6be7ec161fbdb0df1f1f9d7b194c943fc9e1e28af450693f" exitCode=0
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.867737 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2b0c66e-d534-4e7d-91dc-f05f5f857a43","Type":"ContainerDied","Data":"b5e3c32420cd48bb6be7ec161fbdb0df1f1f9d7b194c943fc9e1e28af450693f"}
Feb 23 18:49:02 crc kubenswrapper[4768]: I0223 18:49:02.869490 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.427688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.471591 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.890930 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f2d53e56-3a7e-48fa-b0ea-59b932d3b25a","Type":"ContainerStarted","Data":"25fdfbd677a5dc8d7e9e6622217a5b017604386e5ca175dee33649813dd2d412"}
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.896553 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2b0c66e-d534-4e7d-91dc-f05f5f857a43","Type":"ContainerStarted","Data":"b45b24efee5fd0cbc749afb06e5c7caff71fd20fc4ffebeabbb4461b3642c55e"}
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.897159 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.920390 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.712435835 podStartE2EDuration="31.920360609s" podCreationTimestamp="2026-02-23 18:48:32 +0000 UTC" firstStartedPulling="2026-02-23 18:48:50.339647947 +0000 UTC m=+925.730133757" lastFinishedPulling="2026-02-23 18:48:58.547572731 +0000 UTC m=+933.938058531" observedRunningTime="2026-02-23 18:49:03.915232518 +0000 UTC m=+939.305718358" watchObservedRunningTime="2026-02-23 18:49:03.920360609 +0000 UTC m=+939.310846479"
Feb 23 18:49:03 crc kubenswrapper[4768]: I0223 18:49:03.955850 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=21.517007617 podStartE2EDuration="29.955830561s" podCreationTimestamp="2026-02-23 18:48:34 +0000 UTC" firstStartedPulling="2026-02-23 18:48:49.757871399 +0000 UTC m=+925.148357199" lastFinishedPulling="2026-02-23 18:48:58.196694333 +0000 UTC m=+933.587180143" observedRunningTime="2026-02-23 18:49:03.94776904 +0000 UTC m=+939.338254870" watchObservedRunningTime="2026-02-23 18:49:03.955830561 +0000 UTC m=+939.346316371"
Feb 23 18:49:04 crc kubenswrapper[4768]: I0223 18:49:04.378616 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 23 18:49:04 crc kubenswrapper[4768]: I0223 18:49:04.378660 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 23 18:49:04 crc kubenswrapper[4768]: I0223 18:49:04.975280 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 23 18:49:04 crc kubenswrapper[4768]: I0223 18:49:04.980194 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.255077 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.318698 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-c45dt"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.323380 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.326408 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.335936 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.338563 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.345144 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.349760 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.359911 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c45dt"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.454853 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.456352 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.461034 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wmnlx"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.461075 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.461083 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.461224 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.462790 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493291 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxnr\" (UniqueName: \"kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493383 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-combined-ca-bundle\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493405 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493426 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493465 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovs-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.493520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-config\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.494638 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg6dd\" (UniqueName: \"kubernetes.io/projected/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-kube-api-access-hg6dd\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.494669 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovn-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.526640 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxnr\" (UniqueName: \"kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597393 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597422 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597450 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-combined-ca-bundle\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597498 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-config\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.597550 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovs-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599712 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-config\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg6dd\" (UniqueName: \"kubernetes.io/projected/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-kube-api-access-hg6dd\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599831 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovn-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599893 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599971 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.599999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbkjc\" (UniqueName: \"kubernetes.io/projected/ffe1b163-3686-4036-8f27-a4b600234d8a-kube-api-access-dbkjc\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.600026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-scripts\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.601616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.602170 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovn-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.606185 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.607423 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-ovs-rundir\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.608930 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-config\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.612399 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.602235 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.621508 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.622880 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.635164 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg6dd\" (UniqueName: \"kubernetes.io/projected/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-kube-api-access-hg6dd\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.640677 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.646564 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.646607 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.669673 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"]
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.670208 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c73dca1-1a57-4c3a-8337-dba75d7e7b9c-combined-ca-bundle\") pod \"ovn-controller-metrics-c45dt\" (UID: \"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c\") " pod="openstack/ovn-controller-metrics-c45dt"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701725 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701854 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbkjc\" (UniqueName: \"kubernetes.io/projected/ffe1b163-3686-4036-8f27-a4b600234d8a-kube-api-access-dbkjc\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701885 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-scripts\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-config\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.701969 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.704479 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.704804 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-scripts\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.707079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.711369 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe1b163-3686-4036-8f27-a4b600234d8a-config\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.713947 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxnr\" (UniqueName: \"kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr\") pod \"dnsmasq-dns-7f896c8c65-f82s2\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") " pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.714212 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.720656 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/ffe1b163-3686-4036-8f27-a4b600234d8a-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.747172 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbkjc\" (UniqueName: \"kubernetes.io/projected/ffe1b163-3686-4036-8f27-a4b600234d8a-kube-api-access-dbkjc\") pod \"ovn-northd-0\" (UID: \"ffe1b163-3686-4036-8f27-a4b600234d8a\") " pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.788738 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.804088 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zt9\" (UniqueName: \"kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.804206 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.804261 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.804323 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.804359 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.879484 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.905179 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.905224 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.905290 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.905318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.905379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zt9\" (UniqueName: \"kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.906566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.906809 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.906817 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.907570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.934641 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9zt9\" (UniqueName: \"kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9\") pod
\"dnsmasq-dns-86db49b7ff-g5jph\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") " pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.964662 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-c45dt" Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.969368 4768 generic.go:334] "Generic (PLEG): container finished" podID="0525cc76-435b-409d-ad19-cc48c44f2cbf" containerID="2559cc0f8ea08d097891c6e72d0ba0eb2794668773df8d688c25d8d5c16e4f80" exitCode=0 Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.969435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" event={"ID":"0525cc76-435b-409d-ad19-cc48c44f2cbf","Type":"ContainerDied","Data":"2559cc0f8ea08d097891c6e72d0ba0eb2794668773df8d688c25d8d5c16e4f80"} Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.978322 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.989485 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh" Feb 23 18:49:05 crc kubenswrapper[4768]: I0223 18:49:05.989883 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-88vfh" event={"ID":"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f","Type":"ContainerDied","Data":"daba75694e24073b59f547bca3a5909030ef87ac96a919acc3d2610362f9233f"} Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.006670 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc\") pod \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.006976 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqch\" (UniqueName: \"kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch\") pod \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.007123 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config\") pod \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\" (UID: \"2d4b920a-fcdc-4f02-96a2-6ee2dd23601f\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.007270 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" (UID: "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.007560 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.008365 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config" (OuterVolumeSpecName: "config") pod "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" (UID: "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.010075 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch" (OuterVolumeSpecName: "kube-api-access-jxqch") pod "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" (UID: "2d4b920a-fcdc-4f02-96a2-6ee2dd23601f"). InnerVolumeSpecName "kube-api-access-jxqch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.015363 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.053815 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.112273 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.113045 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxqch\" (UniqueName: \"kubernetes.io/projected/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f-kube-api-access-jxqch\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.388443 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"] Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.403498 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-88vfh"] Feb 23 18:49:06 crc kubenswrapper[4768]: W0223 18:49:06.409216 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe1b163_3686_4036_8f27_a4b600234d8a.slice/crio-80b95ba7131a322508d63926e84d97516ead9a5a1d857941d0e426e4e3513d82 WatchSource:0}: Error finding container 80b95ba7131a322508d63926e84d97516ead9a5a1d857941d0e426e4e3513d82: Status 404 returned error can't find the container with id 80b95ba7131a322508d63926e84d97516ead9a5a1d857941d0e426e4e3513d82 Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.412787 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.422040 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.526595 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config\") pod \"0525cc76-435b-409d-ad19-cc48c44f2cbf\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.526689 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc\") pod \"0525cc76-435b-409d-ad19-cc48c44f2cbf\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.527589 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8lmb\" (UniqueName: \"kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb\") pod \"0525cc76-435b-409d-ad19-cc48c44f2cbf\" (UID: \"0525cc76-435b-409d-ad19-cc48c44f2cbf\") " Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.548011 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb" (OuterVolumeSpecName: "kube-api-access-v8lmb") pod "0525cc76-435b-409d-ad19-cc48c44f2cbf" (UID: "0525cc76-435b-409d-ad19-cc48c44f2cbf"). InnerVolumeSpecName "kube-api-access-v8lmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.552851 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0525cc76-435b-409d-ad19-cc48c44f2cbf" (UID: "0525cc76-435b-409d-ad19-cc48c44f2cbf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.560086 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config" (OuterVolumeSpecName: "config") pod "0525cc76-435b-409d-ad19-cc48c44f2cbf" (UID: "0525cc76-435b-409d-ad19-cc48c44f2cbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.598765 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"] Feb 23 18:49:06 crc kubenswrapper[4768]: W0223 18:49:06.603058 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6aa0c761_4d02_416e_bd62_af70bbf8a593.slice/crio-c8e507c2d667158a7edc2ee70dc8668077295ca9babadb8a2c85c5360970dd55 WatchSource:0}: Error finding container c8e507c2d667158a7edc2ee70dc8668077295ca9babadb8a2c85c5360970dd55: Status 404 returned error can't find the container with id c8e507c2d667158a7edc2ee70dc8668077295ca9babadb8a2c85c5360970dd55 Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.629817 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8lmb\" (UniqueName: \"kubernetes.io/projected/0525cc76-435b-409d-ad19-cc48c44f2cbf-kube-api-access-v8lmb\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.629845 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.629855 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0525cc76-435b-409d-ad19-cc48c44f2cbf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:06 crc 
kubenswrapper[4768]: I0223 18:49:06.724318 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"] Feb 23 18:49:06 crc kubenswrapper[4768]: W0223 18:49:06.724916 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2216fee_68a6_40fc_b747_a4f7c12a3bae.slice/crio-f441fdedaab1da4cff885fc515dc5ebf214068655d6b30862ddbcaff7608bc56 WatchSource:0}: Error finding container f441fdedaab1da4cff885fc515dc5ebf214068655d6b30862ddbcaff7608bc56: Status 404 returned error can't find the container with id f441fdedaab1da4cff885fc515dc5ebf214068655d6b30862ddbcaff7608bc56 Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.727466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-c45dt"] Feb 23 18:49:06 crc kubenswrapper[4768]: I0223 18:49:06.998983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" event={"ID":"c2216fee-68a6-40fc-b747-a4f7c12a3bae","Type":"ContainerStarted","Data":"f441fdedaab1da4cff885fc515dc5ebf214068655d6b30862ddbcaff7608bc56"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.001494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffe1b163-3686-4036-8f27-a4b600234d8a","Type":"ContainerStarted","Data":"80b95ba7131a322508d63926e84d97516ead9a5a1d857941d0e426e4e3513d82"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.002888 4768 generic.go:334] "Generic (PLEG): container finished" podID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerID="1ca63cd057892b64dc65e3b58c1fefa9ab183275b385c0364d89924e1dd7f4d6" exitCode=0 Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.002944 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" 
event={"ID":"6aa0c761-4d02-416e-bd62-af70bbf8a593","Type":"ContainerDied","Data":"1ca63cd057892b64dc65e3b58c1fefa9ab183275b385c0364d89924e1dd7f4d6"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.002969 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" event={"ID":"6aa0c761-4d02-416e-bd62-af70bbf8a593","Type":"ContainerStarted","Data":"c8e507c2d667158a7edc2ee70dc8668077295ca9babadb8a2c85c5360970dd55"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.005087 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c45dt" event={"ID":"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c","Type":"ContainerStarted","Data":"3a1487eae5d6e0d9e867dfa3930d24d31379a2c1f73057bd09e34afbd91fa83f"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.006789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" event={"ID":"0525cc76-435b-409d-ad19-cc48c44f2cbf","Type":"ContainerDied","Data":"7591b13ebc7d7fdeb5df728651a7a73b106dff38b385d066dcdc58363277165c"} Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.006846 4768 scope.go:117] "RemoveContainer" containerID="2559cc0f8ea08d097891c6e72d0ba0eb2794668773df8d688c25d8d5c16e4f80" Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.006888 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ztvvk" Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.121759 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"] Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.147902 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ztvvk"] Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.319147 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" path="/var/lib/kubelet/pods/0525cc76-435b-409d-ad19-cc48c44f2cbf/volumes" Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.320205 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d4b920a-fcdc-4f02-96a2-6ee2dd23601f" path="/var/lib/kubelet/pods/2d4b920a-fcdc-4f02-96a2-6ee2dd23601f/volumes" Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.779683 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 23 18:49:07 crc kubenswrapper[4768]: I0223 18:49:07.905050 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.014587 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerID="b5e7d86a4485a99134376aadf24a485ec925223058f591eb96c6f644611d5dff" exitCode=0 Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.014796 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" event={"ID":"c2216fee-68a6-40fc-b747-a4f7c12a3bae","Type":"ContainerDied","Data":"b5e7d86a4485a99134376aadf24a485ec925223058f591eb96c6f644611d5dff"} Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.017034 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"ffe1b163-3686-4036-8f27-a4b600234d8a","Type":"ContainerStarted","Data":"b62469b1d44978f778a33b0259f97337af868e7d8442ad29db9fd2eeb06a1903"} Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.017101 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"ffe1b163-3686-4036-8f27-a4b600234d8a","Type":"ContainerStarted","Data":"e3ffaaeb62bb989b4547258742d600ab001485946e0ee3dac8b6aa069f1d3cf9"} Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.017261 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.018667 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" event={"ID":"6aa0c761-4d02-416e-bd62-af70bbf8a593","Type":"ContainerStarted","Data":"1de54e134cc624496b623e303cc0882d9708847725e1e821444466860d477065"} Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.019432 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.020967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-c45dt" event={"ID":"1c73dca1-1a57-4c3a-8337-dba75d7e7b9c","Type":"ContainerStarted","Data":"236f36d06118fc453d8a32066a42bb452d79d3a63e770f74d2a2dd9217404f66"} Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.057900 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.897868609 podStartE2EDuration="3.057876147s" podCreationTimestamp="2026-02-23 18:49:05 +0000 UTC" firstStartedPulling="2026-02-23 18:49:06.411365552 +0000 UTC m=+941.801851352" lastFinishedPulling="2026-02-23 18:49:07.57137305 +0000 UTC m=+942.961858890" observedRunningTime="2026-02-23 18:49:08.052224452 +0000 UTC m=+943.442710252" watchObservedRunningTime="2026-02-23 18:49:08.057876147 
+0000 UTC m=+943.448361947" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.108091 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" podStartSLOduration=3.108069323 podStartE2EDuration="3.108069323s" podCreationTimestamp="2026-02-23 18:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:08.081546965 +0000 UTC m=+943.472032765" watchObservedRunningTime="2026-02-23 18:49:08.108069323 +0000 UTC m=+943.498555123" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.115203 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-c45dt" podStartSLOduration=3.115185187 podStartE2EDuration="3.115185187s" podCreationTimestamp="2026-02-23 18:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:08.098073058 +0000 UTC m=+943.488558868" watchObservedRunningTime="2026-02-23 18:49:08.115185187 +0000 UTC m=+943.505670987" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.491986 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.703534 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"] Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.730462 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:49:08 crc kubenswrapper[4768]: E0223 18:49:08.730778 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" containerName="init" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.730794 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" containerName="init" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.730968 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="0525cc76-435b-409d-ad19-cc48c44f2cbf" containerName="init" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.731690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.764044 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.844452 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.844505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.844566 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.844610 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.844634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77f7\" (UniqueName: \"kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.947087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.947178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77f7\" (UniqueName: \"kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.947231 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.947286 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.947351 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.948576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.948667 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.948764 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.949294 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: 
\"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:08 crc kubenswrapper[4768]: I0223 18:49:08.971025 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77f7\" (UniqueName: \"kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7\") pod \"dnsmasq-dns-698758b865-mclnm\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.031720 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" event={"ID":"c2216fee-68a6-40fc-b747-a4f7c12a3bae","Type":"ContainerStarted","Data":"a220884a55aa97961e1b8dabd6b3082feff75ffa56e5645e11c335957f561702"} Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.049030 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.058984 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" podStartSLOduration=4.058963038 podStartE2EDuration="4.058963038s" podCreationTimestamp="2026-02-23 18:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:09.056815989 +0000 UTC m=+944.447301809" watchObservedRunningTime="2026-02-23 18:49:09.058963038 +0000 UTC m=+944.449448838" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.535530 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:49:09 crc kubenswrapper[4768]: W0223 18:49:09.541485 4768 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5823e392_a97a_4f29_a8a4_3dbfeb426417.slice/crio-de2535d5dda74f461f68d39908f6fe71eca86cadef7aef3a2f3a656bc21f44d0 WatchSource:0}: Error finding container de2535d5dda74f461f68d39908f6fe71eca86cadef7aef3a2f3a656bc21f44d0: Status 404 returned error can't find the container with id de2535d5dda74f461f68d39908f6fe71eca86cadef7aef3a2f3a656bc21f44d0 Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.545334 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.545386 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.862497 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.874134 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.879655 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.880017 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-sgx62" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.880234 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.881089 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.906053 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.969779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44rq\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-kube-api-access-v44rq\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.969857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2932248-edbb-4073-8a18-d076462b4201-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.969890 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " 
pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.969914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-cache\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.969999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:09 crc kubenswrapper[4768]: I0223 18:49:09.970025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-lock\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.046476 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerStarted","Data":"de2535d5dda74f461f68d39908f6fe71eca86cadef7aef3a2f3a656bc21f44d0"} Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.046718 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.046894 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="dnsmasq-dns" containerID="cri-o://1de54e134cc624496b623e303cc0882d9708847725e1e821444466860d477065" gracePeriod=10 Feb 23 18:49:10 crc 
kubenswrapper[4768]: I0223 18:49:10.071131 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2932248-edbb-4073-8a18-d076462b4201-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071202 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071232 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-cache\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071490 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-lock\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44rq\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-kube-api-access-v44rq\") 
pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.071942 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.072178 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-cache\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.072264 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/c2932248-edbb-4073-8a18-d076462b4201-lock\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.077969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2932248-edbb-4073-8a18-d076462b4201-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.076242 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.087492 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.087593 4768 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:10.587560163 +0000 UTC m=+945.978046153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.098635 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44rq\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-kube-api-access-v44rq\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.117288 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: I0223 18:49:10.685036 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.685181 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.685712 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found 
Feb 23 18:49:10 crc kubenswrapper[4768]: E0223 18:49:10.685811 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:11.685785562 +0000 UTC m=+947.076271362 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found Feb 23 18:49:11 crc kubenswrapper[4768]: I0223 18:49:11.055793 4768 generic.go:334] "Generic (PLEG): container finished" podID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerID="1de54e134cc624496b623e303cc0882d9708847725e1e821444466860d477065" exitCode=0 Feb 23 18:49:11 crc kubenswrapper[4768]: I0223 18:49:11.055871 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" event={"ID":"6aa0c761-4d02-416e-bd62-af70bbf8a593","Type":"ContainerDied","Data":"1de54e134cc624496b623e303cc0882d9708847725e1e821444466860d477065"} Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:11.715762 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:15 crc kubenswrapper[4768]: E0223 18:49:11.715962 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 18:49:15 crc kubenswrapper[4768]: E0223 18:49:11.716113 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 18:49:15 crc kubenswrapper[4768]: E0223 18:49:11.716160 
4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:13.716144636 +0000 UTC m=+949.106630436 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.768935 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9nswb"] Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.769918 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.777049 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.778617 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.787561 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:15 crc kubenswrapper[4768]: E0223 18:49:13.787710 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 23 18:49:15 crc kubenswrapper[4768]: E0223 18:49:13.787781 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 23 18:49:15 
crc kubenswrapper[4768]: E0223 18:49:13.787827 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:17.787810484 +0000 UTC m=+953.178296284 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.787755 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.791785 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9nswb"] Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.888876 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889475 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889547 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889568 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhj8p\" (UniqueName: \"kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.889866 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.991908 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.991993 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992017 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992042 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhj8p\" (UniqueName: \"kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992150 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts\") pod 
\"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992200 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.992792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.993181 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.993420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.998555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " 
pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:13.999191 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.003791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.010268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhj8p\" (UniqueName: \"kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p\") pod \"swift-ring-rebalance-9nswb\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.085803 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.368300 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4l67m"] Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.371686 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.376441 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.380930 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4l67m"] Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.504596 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts\") pod \"root-account-create-update-4l67m\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.504674 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pts44\" (UniqueName: \"kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44\") pod \"root-account-create-update-4l67m\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.606739 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts\") pod \"root-account-create-update-4l67m\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.606793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pts44\" (UniqueName: \"kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44\") pod \"root-account-create-update-4l67m\" (UID: 
\"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.608091 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts\") pod \"root-account-create-update-4l67m\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.635978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pts44\" (UniqueName: \"kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44\") pod \"root-account-create-update-4l67m\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:14.701120 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:15.094917 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerStarted","Data":"0194ceaed58441ba968a3dfbe2745a04807c293da7652b120cac5e8fff96b8e7"} Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:15.747225 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 23 18:49:15 crc kubenswrapper[4768]: I0223 18:49:15.860970 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.018464 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.113803 4768 generic.go:334] "Generic (PLEG): container finished" podID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerID="0194ceaed58441ba968a3dfbe2745a04807c293da7652b120cac5e8fff96b8e7" exitCode=0 Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.114614 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerDied","Data":"0194ceaed58441ba968a3dfbe2745a04807c293da7652b120cac5e8fff96b8e7"} Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.210523 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4l67m"] Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.213588 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9nswb"] Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.269482 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rsj29"] Feb 23 18:49:16 crc 
kubenswrapper[4768]: I0223 18:49:16.271930 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.284223 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rsj29"]
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.344412 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.365308 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a794-account-create-update-wccgz"]
Feb 23 18:49:16 crc kubenswrapper[4768]: E0223 18:49:16.365726 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="init"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.365742 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="init"
Feb 23 18:49:16 crc kubenswrapper[4768]: E0223 18:49:16.365762 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="dnsmasq-dns"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.365768 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="dnsmasq-dns"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.365933 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="dnsmasq-dns"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb\") pod \"6aa0c761-4d02-416e-bd62-af70bbf8a593\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") "
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379234 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config\") pod \"6aa0c761-4d02-416e-bd62-af70bbf8a593\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") "
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379319 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc\") pod \"6aa0c761-4d02-416e-bd62-af70bbf8a593\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") "
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379449 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztxnr\" (UniqueName: \"kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr\") pod \"6aa0c761-4d02-416e-bd62-af70bbf8a593\" (UID: \"6aa0c761-4d02-416e-bd62-af70bbf8a593\") "
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvz98\" (UniqueName: \"kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379873 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.379882 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.387747 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.392375 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr" (OuterVolumeSpecName: "kube-api-access-ztxnr") pod "6aa0c761-4d02-416e-bd62-af70bbf8a593" (UID: "6aa0c761-4d02-416e-bd62-af70bbf8a593"). InnerVolumeSpecName "kube-api-access-ztxnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.400969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a794-account-create-update-wccgz"]
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.465136 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aa0c761-4d02-416e-bd62-af70bbf8a593" (UID: "6aa0c761-4d02-416e-bd62-af70bbf8a593"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.470537 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6aa0c761-4d02-416e-bd62-af70bbf8a593" (UID: "6aa0c761-4d02-416e-bd62-af70bbf8a593"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.471222 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config" (OuterVolumeSpecName: "config") pod "6aa0c761-4d02-416e-bd62-af70bbf8a593" (UID: "6aa0c761-4d02-416e-bd62-af70bbf8a593"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481353 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvz98\" (UniqueName: \"kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481463 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsm42\" (UniqueName: \"kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481840 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481856 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481866 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztxnr\" (UniqueName: \"kubernetes.io/projected/6aa0c761-4d02-416e-bd62-af70bbf8a593-kube-api-access-ztxnr\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.481906 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aa0c761-4d02-416e-bd62-af70bbf8a593-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.482462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.507366 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvz98\" (UniqueName: \"kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98\") pod \"glance-db-create-rsj29\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.583473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsm42\" (UniqueName: \"kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.583542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.584251 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.609125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsm42\" (UniqueName: \"kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42\") pod \"glance-a794-account-create-update-wccgz\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") " pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.684690 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.704930 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.990399 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-t2chz"]
Feb 23 18:49:16 crc kubenswrapper[4768]: I0223 18:49:16.995012 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.010096 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t2chz"]
Feb 23 18:49:17 crc kubenswrapper[4768]: W0223 18:49:17.057396 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3cff39d_7895_4fa0_ac21_900198443faf.slice/crio-0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f WatchSource:0}: Error finding container 0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f: Status 404 returned error can't find the container with id 0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.079466 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rsj29"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.097514 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bnch\" (UniqueName: \"kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.097625 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.098296 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-05ea-account-create-update-8q99w"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.099705 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.102643 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.115690 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-05ea-account-create-update-8q99w"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.126222 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" event={"ID":"6aa0c761-4d02-416e-bd62-af70bbf8a593","Type":"ContainerDied","Data":"c8e507c2d667158a7edc2ee70dc8668077295ca9babadb8a2c85c5360970dd55"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.126272 4768 scope.go:117] "RemoveContainer" containerID="1de54e134cc624496b623e303cc0882d9708847725e1e821444466860d477065"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.126371 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.128722 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rsj29" event={"ID":"e3cff39d-7895-4fa0-ac21-900198443faf","Type":"ContainerStarted","Data":"0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.149977 4768 generic.go:334] "Generic (PLEG): container finished" podID="00e308bd-769e-4df2-8ac6-1a0e15763c1e" containerID="861026f797e844d6e86a3e0b73a0016d3fab7399ae6f82d8aad40e6d60de1847" exitCode=0
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.150051 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l67m" event={"ID":"00e308bd-769e-4df2-8ac6-1a0e15763c1e","Type":"ContainerDied","Data":"861026f797e844d6e86a3e0b73a0016d3fab7399ae6f82d8aad40e6d60de1847"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.150096 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l67m" event={"ID":"00e308bd-769e-4df2-8ac6-1a0e15763c1e","Type":"ContainerStarted","Data":"8f19f12e5582f3b517c0fdfe6ec5baa3f330447e0037968b188475b2c15f3855"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.153866 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9nswb" event={"ID":"1ddb02d3-f5a2-4681-90fe-4d5572fed381","Type":"ContainerStarted","Data":"8c01dd63fc22d75f57018f511e302ae41a55641171eaabef31bc1de88a0855c7"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.167548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerStarted","Data":"769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65"}
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.168543 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mclnm"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.200479 4768 scope.go:117] "RemoveContainer" containerID="1ca63cd057892b64dc65e3b58c1fefa9ab183275b385c0364d89924e1dd7f4d6"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.201484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bnch\" (UniqueName: \"kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.201532 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgbr7\" (UniqueName: \"kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.201563 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.201599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.202157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.234892 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-mclnm" podStartSLOduration=9.234875514 podStartE2EDuration="9.234875514s" podCreationTimestamp="2026-02-23 18:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:17.200846861 +0000 UTC m=+952.591332661" watchObservedRunningTime="2026-02-23 18:49:17.234875514 +0000 UTC m=+952.625361314"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.236478 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.239943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bnch\" (UniqueName: \"kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch\") pod \"keystone-db-create-t2chz\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.247381 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-f82s2"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.299099 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-nzx66"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.300333 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.304515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgbr7\" (UniqueName: \"kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.304592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.308092 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.323348 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" path="/var/lib/kubelet/pods/6aa0c761-4d02-416e-bd62-af70bbf8a593/volumes"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.323926 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4ba9-account-create-update-8w72x"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.348999 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nzx66"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.349392 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4ba9-account-create-update-8w72x"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.349484 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.356462 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.361308 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgbr7\" (UniqueName: \"kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7\") pod \"keystone-05ea-account-create-update-8q99w\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.364751 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.380129 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a794-account-create-update-wccgz"]
Feb 23 18:49:17 crc kubenswrapper[4768]: W0223 18:49:17.401822 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45ebc246_b507_4457_a2e3_be3ac8ab0aee.slice/crio-6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925 WatchSource:0}: Error finding container 6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925: Status 404 returned error can't find the container with id 6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.412844 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.412931 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.413041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zsn\" (UniqueName: \"kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.413071 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz8b\" (UniqueName: \"kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.418424 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.514903 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.514956 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.515046 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zsn\" (UniqueName: \"kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.515072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whz8b\" (UniqueName: \"kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.517318 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.517456 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.548694 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zsn\" (UniqueName: \"kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn\") pod \"placement-4ba9-account-create-update-8w72x\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.557516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whz8b\" (UniqueName: \"kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b\") pod \"placement-db-create-nzx66\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.680011 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.690468 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.824133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0"
Feb 23 18:49:17 crc kubenswrapper[4768]: E0223 18:49:17.824377 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 23 18:49:17 crc kubenswrapper[4768]: E0223 18:49:17.824394 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 23 18:49:17 crc kubenswrapper[4768]: E0223 18:49:17.824443 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:25.824423965 +0000 UTC m=+961.214909765 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.915135 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-t2chz"]
Feb 23 18:49:17 crc kubenswrapper[4768]: I0223 18:49:17.972064 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-05ea-account-create-update-8q99w"]
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.048461 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-nzx66"]
Feb 23 18:49:18 crc kubenswrapper[4768]: W0223 18:49:18.074645 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c4f6d48_0066_49cb_976b_03567c12faa5.slice/crio-c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0 WatchSource:0}: Error finding container c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0: Status 404 returned error can't find the container with id c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.199888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nzx66" event={"ID":"8c4f6d48-0066-49cb-976b-03567c12faa5","Type":"ContainerStarted","Data":"c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.206141 4768 generic.go:334] "Generic (PLEG): container finished" podID="45ebc246-b507-4457-a2e3-be3ac8ab0aee" containerID="69a91a8d03f8f1668cb256e927f35542f2267992caece74e93ef2a0e4cd6bcc3" exitCode=0
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.206207 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a794-account-create-update-wccgz" event={"ID":"45ebc246-b507-4457-a2e3-be3ac8ab0aee","Type":"ContainerDied","Data":"69a91a8d03f8f1668cb256e927f35542f2267992caece74e93ef2a0e4cd6bcc3"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.206246 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a794-account-create-update-wccgz" event={"ID":"45ebc246-b507-4457-a2e3-be3ac8ab0aee","Type":"ContainerStarted","Data":"6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.215342 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-05ea-account-create-update-8q99w" event={"ID":"4d54474d-4da4-4a70-8505-4cee013ef52a","Type":"ContainerStarted","Data":"792219f0d00bfc0e2ac6e706b8a39331a89e071f616bcf2dd17dee478fd35f0e"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.221230 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t2chz" event={"ID":"dd209e6b-7e31-4061-9cfe-2bfa6b279c76","Type":"ContainerStarted","Data":"b16f00922f00e7a859af9914637a88a00dc5c69741f91ac31a5b7f947b8fbae5"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.231470 4768 generic.go:334] "Generic (PLEG): container finished" podID="e3cff39d-7895-4fa0-ac21-900198443faf" containerID="14adec7acd33fc66a67410556b64fa408160758caa217771fdd1c55cf9c3d7c6" exitCode=0
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.231597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rsj29" event={"ID":"e3cff39d-7895-4fa0-ac21-900198443faf","Type":"ContainerDied","Data":"14adec7acd33fc66a67410556b64fa408160758caa217771fdd1c55cf9c3d7c6"}
Feb 23 18:49:18 crc kubenswrapper[4768]: I0223 18:49:18.326365 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4ba9-account-create-update-8w72x"]
Feb 23 18:49:18 crc kubenswrapper[4768]: E0223 18:49:18.737756 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d54474d_4da4_4a70_8505_4cee013ef52a.slice/crio-conmon-241e453d5d20186226ad1c1a109ad47c3a1d3b0d5d23306d9fe6412494c22095.scope\": RecentStats: unable to find data in memory cache]"
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.243113 4768 generic.go:334] "Generic (PLEG): container finished" podID="dd209e6b-7e31-4061-9cfe-2bfa6b279c76" containerID="980acb98501da422063ed41656c9fbccdf7f1d1c7379ad5ba13d197475767191" exitCode=0
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.243172 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t2chz" event={"ID":"dd209e6b-7e31-4061-9cfe-2bfa6b279c76","Type":"ContainerDied","Data":"980acb98501da422063ed41656c9fbccdf7f1d1c7379ad5ba13d197475767191"}
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.245960 4768 generic.go:334] "Generic (PLEG): container finished" podID="8c4f6d48-0066-49cb-976b-03567c12faa5" containerID="260a905565f25427c0e6ced6534920f624adf74de23542e90163c2a07951e183" exitCode=0
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.246041 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nzx66" event={"ID":"8c4f6d48-0066-49cb-976b-03567c12faa5","Type":"ContainerDied","Data":"260a905565f25427c0e6ced6534920f624adf74de23542e90163c2a07951e183"}
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.268642 4768 generic.go:334] "Generic (PLEG): container finished" podID="4d54474d-4da4-4a70-8505-4cee013ef52a" containerID="241e453d5d20186226ad1c1a109ad47c3a1d3b0d5d23306d9fe6412494c22095" exitCode=0
Feb 23 18:49:19 crc kubenswrapper[4768]: I0223 18:49:19.268998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-05ea-account-create-update-8q99w" event={"ID":"4d54474d-4da4-4a70-8505-4cee013ef52a","Type":"ContainerDied","Data":"241e453d5d20186226ad1c1a109ad47c3a1d3b0d5d23306d9fe6412494c22095"}
Feb 23 18:49:20 crc kubenswrapper[4768]: W0223 18:49:20.691829 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf225be18_66fe_405e_890e_51d17f889971.slice/crio-188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f WatchSource:0}: Error finding container 188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f: Status 404 returned error can't find the container with id 188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.904330 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a794-account-create-update-wccgz"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.935414 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4l67m"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.942810 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rsj29"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.963007 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-t2chz"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.967432 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nzx66"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.985607 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-f82s2" podUID="6aa0c761-4d02-416e-bd62-af70bbf8a593" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.106:5353: i/o timeout"
Feb 23 18:49:20 crc kubenswrapper[4768]: I0223 18:49:20.994561 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-05ea-account-create-update-8q99w"
Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002191 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pts44\" (UniqueName: \"kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44\") pod \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") "
Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsm42\" (UniqueName: \"kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42\") pod \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") "
Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts\") pod \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\" (UID: \"45ebc246-b507-4457-a2e3-be3ac8ab0aee\") "
Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002348 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts\") pod \"e3cff39d-7895-4fa0-ac21-900198443faf\" (UID:
\"e3cff39d-7895-4fa0-ac21-900198443faf\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvz98\" (UniqueName: \"kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98\") pod \"e3cff39d-7895-4fa0-ac21-900198443faf\" (UID: \"e3cff39d-7895-4fa0-ac21-900198443faf\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.002459 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts\") pod \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\" (UID: \"00e308bd-769e-4df2-8ac6-1a0e15763c1e\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.003399 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "45ebc246-b507-4457-a2e3-be3ac8ab0aee" (UID: "45ebc246-b507-4457-a2e3-be3ac8ab0aee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.003396 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e3cff39d-7895-4fa0-ac21-900198443faf" (UID: "e3cff39d-7895-4fa0-ac21-900198443faf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.003871 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00e308bd-769e-4df2-8ac6-1a0e15763c1e" (UID: "00e308bd-769e-4df2-8ac6-1a0e15763c1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.012995 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42" (OuterVolumeSpecName: "kube-api-access-xsm42") pod "45ebc246-b507-4457-a2e3-be3ac8ab0aee" (UID: "45ebc246-b507-4457-a2e3-be3ac8ab0aee"). InnerVolumeSpecName "kube-api-access-xsm42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.013648 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98" (OuterVolumeSpecName: "kube-api-access-lvz98") pod "e3cff39d-7895-4fa0-ac21-900198443faf" (UID: "e3cff39d-7895-4fa0-ac21-900198443faf"). InnerVolumeSpecName "kube-api-access-lvz98". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.014905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44" (OuterVolumeSpecName: "kube-api-access-pts44") pod "00e308bd-769e-4df2-8ac6-1a0e15763c1e" (UID: "00e308bd-769e-4df2-8ac6-1a0e15763c1e"). InnerVolumeSpecName "kube-api-access-pts44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104108 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts\") pod \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104195 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts\") pod \"8c4f6d48-0066-49cb-976b-03567c12faa5\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104219 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts\") pod \"4d54474d-4da4-4a70-8505-4cee013ef52a\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104295 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whz8b\" (UniqueName: \"kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b\") pod \"8c4f6d48-0066-49cb-976b-03567c12faa5\" (UID: \"8c4f6d48-0066-49cb-976b-03567c12faa5\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104328 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgbr7\" (UniqueName: \"kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7\") pod \"4d54474d-4da4-4a70-8505-4cee013ef52a\" (UID: \"4d54474d-4da4-4a70-8505-4cee013ef52a\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104366 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-6bnch\" (UniqueName: \"kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch\") pod \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\" (UID: \"dd209e6b-7e31-4061-9cfe-2bfa6b279c76\") " Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104669 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd209e6b-7e31-4061-9cfe-2bfa6b279c76" (UID: "dd209e6b-7e31-4061-9cfe-2bfa6b279c76"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104735 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c4f6d48-0066-49cb-976b-03567c12faa5" (UID: "8c4f6d48-0066-49cb-976b-03567c12faa5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104801 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e308bd-769e-4df2-8ac6-1a0e15763c1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104815 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pts44\" (UniqueName: \"kubernetes.io/projected/00e308bd-769e-4df2-8ac6-1a0e15763c1e-kube-api-access-pts44\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104826 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsm42\" (UniqueName: \"kubernetes.io/projected/45ebc246-b507-4457-a2e3-be3ac8ab0aee-kube-api-access-xsm42\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104836 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104844 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/45ebc246-b507-4457-a2e3-be3ac8ab0aee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104855 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e3cff39d-7895-4fa0-ac21-900198443faf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.104865 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvz98\" (UniqueName: \"kubernetes.io/projected/e3cff39d-7895-4fa0-ac21-900198443faf-kube-api-access-lvz98\") on node \"crc\" DevicePath \"\"" Feb 23 
18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.105055 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d54474d-4da4-4a70-8505-4cee013ef52a" (UID: "4d54474d-4da4-4a70-8505-4cee013ef52a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.109173 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch" (OuterVolumeSpecName: "kube-api-access-6bnch") pod "dd209e6b-7e31-4061-9cfe-2bfa6b279c76" (UID: "dd209e6b-7e31-4061-9cfe-2bfa6b279c76"). InnerVolumeSpecName "kube-api-access-6bnch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.110127 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7" (OuterVolumeSpecName: "kube-api-access-kgbr7") pod "4d54474d-4da4-4a70-8505-4cee013ef52a" (UID: "4d54474d-4da4-4a70-8505-4cee013ef52a"). InnerVolumeSpecName "kube-api-access-kgbr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.112764 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b" (OuterVolumeSpecName: "kube-api-access-whz8b") pod "8c4f6d48-0066-49cb-976b-03567c12faa5" (UID: "8c4f6d48-0066-49cb-976b-03567c12faa5"). InnerVolumeSpecName "kube-api-access-whz8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.207525 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c4f6d48-0066-49cb-976b-03567c12faa5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.208101 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d54474d-4da4-4a70-8505-4cee013ef52a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.208113 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whz8b\" (UniqueName: \"kubernetes.io/projected/8c4f6d48-0066-49cb-976b-03567c12faa5-kube-api-access-whz8b\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.208126 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgbr7\" (UniqueName: \"kubernetes.io/projected/4d54474d-4da4-4a70-8505-4cee013ef52a-kube-api-access-kgbr7\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.208136 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bnch\" (UniqueName: \"kubernetes.io/projected/dd209e6b-7e31-4061-9cfe-2bfa6b279c76-kube-api-access-6bnch\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.314864 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-nzx66" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.318027 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a794-account-create-update-wccgz" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.320413 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-05ea-account-create-update-8q99w" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.322483 4768 generic.go:334] "Generic (PLEG): container finished" podID="f225be18-66fe-405e-890e-51d17f889971" containerID="7a919a0fab20f4dd5814b43ec678770debc5e111833322911c5ddec9cb8d46a8" exitCode=0 Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327461 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9nswb" event={"ID":"1ddb02d3-f5a2-4681-90fe-4d5572fed381","Type":"ContainerStarted","Data":"989772dd29deb521ad2f4a6c75ff47ddda63a0aa8db873ae43a53cae6ab774cd"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-nzx66" event={"ID":"8c4f6d48-0066-49cb-976b-03567c12faa5","Type":"ContainerDied","Data":"c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327560 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c71377e488f4c3466f62eb24b6abbd24c78a4e83d29afe8faf12dd5942c228a0" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327574 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a794-account-create-update-wccgz" event={"ID":"45ebc246-b507-4457-a2e3-be3ac8ab0aee","Type":"ContainerDied","Data":"6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327587 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6975e043c4ca35f1422eab9b99b345c716b5be7af20a46ef85680f482c929925" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327597 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-05ea-account-create-update-8q99w" 
event={"ID":"4d54474d-4da4-4a70-8505-4cee013ef52a","Type":"ContainerDied","Data":"792219f0d00bfc0e2ac6e706b8a39331a89e071f616bcf2dd17dee478fd35f0e"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327610 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="792219f0d00bfc0e2ac6e706b8a39331a89e071f616bcf2dd17dee478fd35f0e" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327630 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ba9-account-create-update-8w72x" event={"ID":"f225be18-66fe-405e-890e-51d17f889971","Type":"ContainerDied","Data":"7a919a0fab20f4dd5814b43ec678770debc5e111833322911c5ddec9cb8d46a8"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.327644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ba9-account-create-update-8w72x" event={"ID":"f225be18-66fe-405e-890e-51d17f889971","Type":"ContainerStarted","Data":"188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.336839 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-t2chz" event={"ID":"dd209e6b-7e31-4061-9cfe-2bfa6b279c76","Type":"ContainerDied","Data":"b16f00922f00e7a859af9914637a88a00dc5c69741f91ac31a5b7f947b8fbae5"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.336975 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b16f00922f00e7a859af9914637a88a00dc5c69741f91ac31a5b7f947b8fbae5" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.337095 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-t2chz" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.343185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rsj29" event={"ID":"e3cff39d-7895-4fa0-ac21-900198443faf","Type":"ContainerDied","Data":"0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.343226 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0767502837f3cfefe63ed750ce5a16334625c269d286c6d4eb5b3d845709149f" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.343332 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rsj29" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.343705 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9nswb" podStartSLOduration=3.8132966169999998 podStartE2EDuration="8.343687004s" podCreationTimestamp="2026-02-23 18:49:13 +0000 UTC" firstStartedPulling="2026-02-23 18:49:16.256721801 +0000 UTC m=+951.647207601" lastFinishedPulling="2026-02-23 18:49:20.787112188 +0000 UTC m=+956.177597988" observedRunningTime="2026-02-23 18:49:21.332810656 +0000 UTC m=+956.723296456" watchObservedRunningTime="2026-02-23 18:49:21.343687004 +0000 UTC m=+956.734172804" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.346909 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4l67m" event={"ID":"00e308bd-769e-4df2-8ac6-1a0e15763c1e","Type":"ContainerDied","Data":"8f19f12e5582f3b517c0fdfe6ec5baa3f330447e0037968b188475b2c15f3855"} Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.346942 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f19f12e5582f3b517c0fdfe6ec5baa3f330447e0037968b188475b2c15f3855" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.346999 
4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4l67m" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536129 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536547 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3cff39d-7895-4fa0-ac21-900198443faf" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536568 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3cff39d-7895-4fa0-ac21-900198443faf" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536584 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e308bd-769e-4df2-8ac6-1a0e15763c1e" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536591 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e308bd-769e-4df2-8ac6-1a0e15763c1e" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536613 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ebc246-b507-4457-a2e3-be3ac8ab0aee" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536619 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ebc246-b507-4457-a2e3-be3ac8ab0aee" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536631 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d54474d-4da4-4a70-8505-4cee013ef52a" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536637 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d54474d-4da4-4a70-8505-4cee013ef52a" containerName="mariadb-account-create-update" Feb 23 
18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536647 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd209e6b-7e31-4061-9cfe-2bfa6b279c76" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536653 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd209e6b-7e31-4061-9cfe-2bfa6b279c76" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: E0223 18:49:21.536662 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c4f6d48-0066-49cb-976b-03567c12faa5" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536669 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c4f6d48-0066-49cb-976b-03567c12faa5" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536809 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d54474d-4da4-4a70-8505-4cee013ef52a" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536822 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e308bd-769e-4df2-8ac6-1a0e15763c1e" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536841 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd209e6b-7e31-4061-9cfe-2bfa6b279c76" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536850 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3cff39d-7895-4fa0-ac21-900198443faf" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536860 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c4f6d48-0066-49cb-976b-03567c12faa5" containerName="mariadb-database-create" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.536867 4768 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="45ebc246-b507-4457-a2e3-be3ac8ab0aee" containerName="mariadb-account-create-update" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.538153 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.550752 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.620492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.620982 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.621165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d2ms\" (UniqueName: \"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.722995 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d2ms\" (UniqueName: \"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms\") pod \"redhat-operators-wh44n\" 
(UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.723095 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.723170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.723663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.724581 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.743595 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d2ms\" (UniqueName: \"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms\") pod \"redhat-operators-wh44n\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " 
pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:21 crc kubenswrapper[4768]: I0223 18:49:21.903500 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.411123 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.668910 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ba9-account-create-update-8w72x" Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.748111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts\") pod \"f225be18-66fe-405e-890e-51d17f889971\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.748311 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zsn\" (UniqueName: \"kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn\") pod \"f225be18-66fe-405e-890e-51d17f889971\" (UID: \"f225be18-66fe-405e-890e-51d17f889971\") " Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.748700 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f225be18-66fe-405e-890e-51d17f889971" (UID: "f225be18-66fe-405e-890e-51d17f889971"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.748788 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f225be18-66fe-405e-890e-51d17f889971-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.756501 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn" (OuterVolumeSpecName: "kube-api-access-r5zsn") pod "f225be18-66fe-405e-890e-51d17f889971" (UID: "f225be18-66fe-405e-890e-51d17f889971"). InnerVolumeSpecName "kube-api-access-r5zsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.850776 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zsn\" (UniqueName: \"kubernetes.io/projected/f225be18-66fe-405e-890e-51d17f889971-kube-api-access-r5zsn\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.989246 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4l67m"]
Feb 23 18:49:22 crc kubenswrapper[4768]: I0223 18:49:22.993941 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4l67m"]
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.076203 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dqqss"]
Feb 23 18:49:23 crc kubenswrapper[4768]: E0223 18:49:23.076600 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f225be18-66fe-405e-890e-51d17f889971" containerName="mariadb-account-create-update"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.076623 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f225be18-66fe-405e-890e-51d17f889971" containerName="mariadb-account-create-update"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.076831 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f225be18-66fe-405e-890e-51d17f889971" containerName="mariadb-account-create-update"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.077555 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.080279 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.089242 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dqqss"]
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.155924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57f9m\" (UniqueName: \"kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.156556 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.258625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.258699 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57f9m\" (UniqueName: \"kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.259956 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.290052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57f9m\" (UniqueName: \"kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m\") pod \"root-account-create-update-dqqss\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") " pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.332170 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e308bd-769e-4df2-8ac6-1a0e15763c1e" path="/var/lib/kubelet/pods/00e308bd-769e-4df2-8ac6-1a0e15763c1e/volumes"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.387545 4768 generic.go:334] "Generic (PLEG): container finished" podID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerID="8950944ed237e1903bb4e956e9e9496fa8c259943744c2c4afe591a90782d9cf" exitCode=0
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.387632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerDied","Data":"8950944ed237e1903bb4e956e9e9496fa8c259943744c2c4afe591a90782d9cf"}
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.395190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.416696 4768 generic.go:334] "Generic (PLEG): container finished" podID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerID="173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c" exitCode=0
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.416815 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerDied","Data":"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"}
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.488194 4768 generic.go:334] "Generic (PLEG): container finished" podID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerID="ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9" exitCode=0
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.488399 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerDied","Data":"ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9"}
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.488435 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerStarted","Data":"c4507e63999d36c8a491c5a99c2a1d619b7f504127600b67d214e008c3a0e680"}
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.491842 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4ba9-account-create-update-8w72x" event={"ID":"f225be18-66fe-405e-890e-51d17f889971","Type":"ContainerDied","Data":"188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f"}
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.491878 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188773c0f7aeb0d29f36e2e07f21783f7556e56892acfd369b318fd0a884cd8f"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.491968 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4ba9-account-create-update-8w72x"
Feb 23 18:49:23 crc kubenswrapper[4768]: I0223 18:49:23.835699 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dqqss"]
Feb 23 18:49:23 crc kubenswrapper[4768]: W0223 18:49:23.847150 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66a7bcb3_6d0f_4dfe_9704_0b506184105a.slice/crio-d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7 WatchSource:0}: Error finding container d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7: Status 404 returned error can't find the container with id d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.051525 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-mclnm"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.112582 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"]
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.112922 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="dnsmasq-dns" containerID="cri-o://a220884a55aa97961e1b8dabd6b3082feff75ffa56e5645e11c335957f561702" gracePeriod=10
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.501988 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dqqss" event={"ID":"66a7bcb3-6d0f-4dfe-9704-0b506184105a","Type":"ContainerStarted","Data":"c1c2091e11192e05561807845a68870367f3725dfb72a474d2be078b72b2d602"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.502535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dqqss" event={"ID":"66a7bcb3-6d0f-4dfe-9704-0b506184105a","Type":"ContainerStarted","Data":"d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.506403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerStarted","Data":"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.510083 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerStarted","Data":"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.510881 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.522468 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerStarted","Data":"dc99159db18f1bf85e1516936378fe88ec435033a46902b949c8d19a8920befb"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.523119 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.525959 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerID="a220884a55aa97961e1b8dabd6b3082feff75ffa56e5645e11c335957f561702" exitCode=0
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.526003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" event={"ID":"c2216fee-68a6-40fc-b747-a4f7c12a3bae","Type":"ContainerDied","Data":"a220884a55aa97961e1b8dabd6b3082feff75ffa56e5645e11c335957f561702"}
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.533939 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-dqqss" podStartSLOduration=1.533919944 podStartE2EDuration="1.533919944s" podCreationTimestamp="2026-02-23 18:49:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:24.524832795 +0000 UTC m=+959.915318595" watchObservedRunningTime="2026-02-23 18:49:24.533919944 +0000 UTC m=+959.924405744"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.598468 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.707210715 podStartE2EDuration="53.598453413s" podCreationTimestamp="2026-02-23 18:48:31 +0000 UTC" firstStartedPulling="2026-02-23 18:48:34.008681945 +0000 UTC m=+909.399167745" lastFinishedPulling="2026-02-23 18:48:49.899924643 +0000 UTC m=+925.290410443" observedRunningTime="2026-02-23 18:49:24.594785953 +0000 UTC m=+959.985271763" watchObservedRunningTime="2026-02-23 18:49:24.598453413 +0000 UTC m=+959.988939213"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.634382 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.544597029 podStartE2EDuration="53.634361308s" podCreationTimestamp="2026-02-23 18:48:31 +0000 UTC" firstStartedPulling="2026-02-23 18:48:33.676273074 +0000 UTC m=+909.066758874" lastFinishedPulling="2026-02-23 18:48:49.766037353 +0000 UTC m=+925.156523153" observedRunningTime="2026-02-23 18:49:24.632550078 +0000 UTC m=+960.023035878" watchObservedRunningTime="2026-02-23 18:49:24.634361308 +0000 UTC m=+960.024847098"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.653999 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.737540 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc\") pod \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") "
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.737992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9zt9\" (UniqueName: \"kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9\") pod \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") "
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.738120 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb\") pod \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") "
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.738188 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config\") pod \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") "
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.738267 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb\") pod \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\" (UID: \"c2216fee-68a6-40fc-b747-a4f7c12a3bae\") "
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.744633 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9" (OuterVolumeSpecName: "kube-api-access-j9zt9") pod "c2216fee-68a6-40fc-b747-a4f7c12a3bae" (UID: "c2216fee-68a6-40fc-b747-a4f7c12a3bae"). InnerVolumeSpecName "kube-api-access-j9zt9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.801113 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2216fee-68a6-40fc-b747-a4f7c12a3bae" (UID: "c2216fee-68a6-40fc-b747-a4f7c12a3bae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.826588 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2216fee-68a6-40fc-b747-a4f7c12a3bae" (UID: "c2216fee-68a6-40fc-b747-a4f7c12a3bae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.833760 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config" (OuterVolumeSpecName: "config") pod "c2216fee-68a6-40fc-b747-a4f7c12a3bae" (UID: "c2216fee-68a6-40fc-b747-a4f7c12a3bae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.840270 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.840300 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9zt9\" (UniqueName: \"kubernetes.io/projected/c2216fee-68a6-40fc-b747-a4f7c12a3bae-kube-api-access-j9zt9\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.840314 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.840322 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.843860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c2216fee-68a6-40fc-b747-a4f7c12a3bae" (UID: "c2216fee-68a6-40fc-b747-a4f7c12a3bae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:24 crc kubenswrapper[4768]: I0223 18:49:24.942041 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c2216fee-68a6-40fc-b747-a4f7c12a3bae-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.536112 4768 generic.go:334] "Generic (PLEG): container finished" podID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerID="9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109" exitCode=0
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.536183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerDied","Data":"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109"}
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.540319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph" event={"ID":"c2216fee-68a6-40fc-b747-a4f7c12a3bae","Type":"ContainerDied","Data":"f441fdedaab1da4cff885fc515dc5ebf214068655d6b30862ddbcaff7608bc56"}
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.540394 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-g5jph"
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.540406 4768 scope.go:117] "RemoveContainer" containerID="a220884a55aa97961e1b8dabd6b3082feff75ffa56e5645e11c335957f561702"
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.543613 4768 generic.go:334] "Generic (PLEG): container finished" podID="66a7bcb3-6d0f-4dfe-9704-0b506184105a" containerID="c1c2091e11192e05561807845a68870367f3725dfb72a474d2be078b72b2d602" exitCode=0
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.543751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dqqss" event={"ID":"66a7bcb3-6d0f-4dfe-9704-0b506184105a","Type":"ContainerDied","Data":"c1c2091e11192e05561807845a68870367f3725dfb72a474d2be078b72b2d602"}
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.562192 4768 scope.go:117] "RemoveContainer" containerID="b5e7d86a4485a99134376aadf24a485ec925223058f591eb96c6f644611d5dff"
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.615659 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"]
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.628353 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-g5jph"]
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.863626 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0"
Feb 23 18:49:25 crc kubenswrapper[4768]: E0223 18:49:25.863888 4768 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 23 18:49:25 crc kubenswrapper[4768]: E0223 18:49:25.863919 4768 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 23 18:49:25 crc kubenswrapper[4768]: E0223 18:49:25.864010 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift podName:c2932248-edbb-4073-8a18-d076462b4201 nodeName:}" failed. No retries permitted until 2026-02-23 18:49:41.863984504 +0000 UTC m=+977.254470304 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift") pod "swift-storage-0" (UID: "c2932248-edbb-4073-8a18-d076462b4201") : configmap "swift-ring-files" not found
Feb 23 18:49:25 crc kubenswrapper[4768]: I0223 18:49:25.884394 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.323113 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tjn26"]
Feb 23 18:49:26 crc kubenswrapper[4768]: E0223 18:49:26.324150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="dnsmasq-dns"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.324179 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="dnsmasq-dns"
Feb 23 18:49:26 crc kubenswrapper[4768]: E0223 18:49:26.324198 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="init"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.324206 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="init"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.324499 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" containerName="dnsmasq-dns"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.325962 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.375741 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.375822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.375860 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nch8c\" (UniqueName: \"kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.402113 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjn26"]
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.477176 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.477717 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.477804 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.477837 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nch8c\" (UniqueName: \"kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.478451 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.506774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nch8c\" (UniqueName: \"kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c\") pod \"certified-operators-tjn26\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.554882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerStarted","Data":"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57"}
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.602459 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wh44n" podStartSLOduration=3.152277323 podStartE2EDuration="5.602427735s" podCreationTimestamp="2026-02-23 18:49:21 +0000 UTC" firstStartedPulling="2026-02-23 18:49:23.521811681 +0000 UTC m=+958.912297481" lastFinishedPulling="2026-02-23 18:49:25.971962093 +0000 UTC m=+961.362447893" observedRunningTime="2026-02-23 18:49:26.595844275 +0000 UTC m=+961.986330075" watchObservedRunningTime="2026-02-23 18:49:26.602427735 +0000 UTC m=+961.992913535"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.644702 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjn26"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.683934 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-rfckb"]
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.685444 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.689910 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.690119 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bmfg2"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.712532 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rfckb"]
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.787144 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.787241 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.787295 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cjjk\" (UniqueName: \"kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.787319 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.889487 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.890011 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.890047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cjjk\" (UniqueName: \"kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.890071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.915190 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.915274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.921958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cjjk\" (UniqueName: \"kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:26 crc kubenswrapper[4768]: I0223 18:49:26.922471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data\") pod \"glance-db-sync-rfckb\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.081387 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rfckb"
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.088184 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tjn26"]
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.330402 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dqqss"
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.376345 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2216fee-68a6-40fc-b747-a4f7c12a3bae" path="/var/lib/kubelet/pods/c2216fee-68a6-40fc-b747-a4f7c12a3bae/volumes"
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.399992 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57f9m\" (UniqueName: \"kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m\") pod \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") "
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.400132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts\") pod \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\" (UID: \"66a7bcb3-6d0f-4dfe-9704-0b506184105a\") "
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.401354 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66a7bcb3-6d0f-4dfe-9704-0b506184105a" (UID: "66a7bcb3-6d0f-4dfe-9704-0b506184105a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.415712 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m" (OuterVolumeSpecName: "kube-api-access-57f9m") pod "66a7bcb3-6d0f-4dfe-9704-0b506184105a" (UID: "66a7bcb3-6d0f-4dfe-9704-0b506184105a"). InnerVolumeSpecName "kube-api-access-57f9m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.503106 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66a7bcb3-6d0f-4dfe-9704-0b506184105a-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.503170 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57f9m\" (UniqueName: \"kubernetes.io/projected/66a7bcb3-6d0f-4dfe-9704-0b506184105a-kube-api-access-57f9m\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.564774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dqqss" event={"ID":"66a7bcb3-6d0f-4dfe-9704-0b506184105a","Type":"ContainerDied","Data":"d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7"}
Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.564831 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-dqqss" Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.564835 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d64ce49987c092998ffd41f9584e24c6b56f4b50e50b1a1582c0de54161aa3e7" Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.567629 4768 generic.go:334] "Generic (PLEG): container finished" podID="eff6033d-2c50-420e-a764-e6e100dead6e" containerID="8091ad0fa8d2246a68552a3250723263b303cf9a9e85190565f9e835c88e546e" exitCode=0 Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.568404 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerDied","Data":"8091ad0fa8d2246a68552a3250723263b303cf9a9e85190565f9e835c88e546e"} Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.568477 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerStarted","Data":"d4caf62c69332936519bd99a3b001d115008aeb91c2e4d4ed6032abfb798d23a"} Feb 23 18:49:27 crc kubenswrapper[4768]: I0223 18:49:27.819619 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-rfckb"] Feb 23 18:49:28 crc kubenswrapper[4768]: I0223 18:49:28.577824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rfckb" event={"ID":"513bdad8-19c5-4fea-aaef-afecd7f21ab3","Type":"ContainerStarted","Data":"cc64bba9567279e67d79c99705fe47992bde06adaed0a8fc237ec86c6ef61877"} Feb 23 18:49:28 crc kubenswrapper[4768]: I0223 18:49:28.581730 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerStarted","Data":"3214e8a06caaa7ad269810877da491b75a8f00fdf204ce934ccae4b1c3827abc"} Feb 23 18:49:28 crc 
kubenswrapper[4768]: E0223 18:49:28.973300 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff6033d_2c50_420e_a764_e6e100dead6e.slice/crio-conmon-3214e8a06caaa7ad269810877da491b75a8f00fdf204ce934ccae4b1c3827abc.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:49:29 crc kubenswrapper[4768]: I0223 18:49:29.420337 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dqqss"] Feb 23 18:49:29 crc kubenswrapper[4768]: I0223 18:49:29.429545 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-dqqss"] Feb 23 18:49:29 crc kubenswrapper[4768]: I0223 18:49:29.609129 4768 generic.go:334] "Generic (PLEG): container finished" podID="eff6033d-2c50-420e-a764-e6e100dead6e" containerID="3214e8a06caaa7ad269810877da491b75a8f00fdf204ce934ccae4b1c3827abc" exitCode=0 Feb 23 18:49:29 crc kubenswrapper[4768]: I0223 18:49:29.609183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerDied","Data":"3214e8a06caaa7ad269810877da491b75a8f00fdf204ce934ccae4b1c3827abc"} Feb 23 18:49:31 crc kubenswrapper[4768]: I0223 18:49:31.325288 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66a7bcb3-6d0f-4dfe-9704-0b506184105a" path="/var/lib/kubelet/pods/66a7bcb3-6d0f-4dfe-9704-0b506184105a/volumes" Feb 23 18:49:31 crc kubenswrapper[4768]: I0223 18:49:31.903675 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:31 crc kubenswrapper[4768]: I0223 18:49:31.903745 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.409772 
4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-7xj45" podUID="6c33d166-1e3e-46c5-a725-472499a5efab" containerName="ovn-controller" probeResult="failure" output=< Feb 23 18:49:32 crc kubenswrapper[4768]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 23 18:49:32 crc kubenswrapper[4768]: > Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.477622 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.482124 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9r6tg" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.635695 4768 generic.go:334] "Generic (PLEG): container finished" podID="1ddb02d3-f5a2-4681-90fe-4d5572fed381" containerID="989772dd29deb521ad2f4a6c75ff47ddda63a0aa8db873ae43a53cae6ab774cd" exitCode=0 Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.635780 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9nswb" event={"ID":"1ddb02d3-f5a2-4681-90fe-4d5572fed381","Type":"ContainerDied","Data":"989772dd29deb521ad2f4a6c75ff47ddda63a0aa8db873ae43a53cae6ab774cd"} Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.640988 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerStarted","Data":"5d1e02f78f781b5eac88707d1e24e40242f97d495f10f4ed197ca79ef9e3b1a3"} Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.678930 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tjn26" podStartSLOduration=1.865287355 podStartE2EDuration="6.678909174s" podCreationTimestamp="2026-02-23 18:49:26 +0000 UTC" firstStartedPulling="2026-02-23 18:49:27.57006448 +0000 UTC 
m=+962.960550271" lastFinishedPulling="2026-02-23 18:49:32.38368629 +0000 UTC m=+967.774172090" observedRunningTime="2026-02-23 18:49:32.670922355 +0000 UTC m=+968.061408175" watchObservedRunningTime="2026-02-23 18:49:32.678909174 +0000 UTC m=+968.069394974" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.724081 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-7xj45-config-cxn9f"] Feb 23 18:49:32 crc kubenswrapper[4768]: E0223 18:49:32.724626 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66a7bcb3-6d0f-4dfe-9704-0b506184105a" containerName="mariadb-account-create-update" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.724645 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="66a7bcb3-6d0f-4dfe-9704-0b506184105a" containerName="mariadb-account-create-update" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.724807 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="66a7bcb3-6d0f-4dfe-9704-0b506184105a" containerName="mariadb-account-create-update" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.725311 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.726955 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.736584 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xj45-config-cxn9f"] Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.820866 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.820974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.821013 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj92x\" (UniqueName: \"kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.821054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: 
\"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.821215 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.821356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923178 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923270 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: 
\"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923566 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj92x\" (UniqueName: \"kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923700 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923571 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: 
\"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.923821 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.924758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.925767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.941283 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj92x\" (UniqueName: \"kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x\") pod \"ovn-controller-7xj45-config-cxn9f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:32 crc kubenswrapper[4768]: I0223 18:49:32.961901 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wh44n" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="registry-server" probeResult="failure" output=< Feb 23 18:49:32 crc kubenswrapper[4768]: timeout: failed to 
connect service ":50051" within 1s Feb 23 18:49:32 crc kubenswrapper[4768]: > Feb 23 18:49:33 crc kubenswrapper[4768]: I0223 18:49:33.047851 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:33 crc kubenswrapper[4768]: I0223 18:49:33.206242 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Feb 23 18:49:33 crc kubenswrapper[4768]: I0223 18:49:33.602971 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-7xj45-config-cxn9f"] Feb 23 18:49:33 crc kubenswrapper[4768]: I0223 18:49:33.665436 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45-config-cxn9f" event={"ID":"c9e8b257-8aad-49d5-9746-c4506abf436f","Type":"ContainerStarted","Data":"1190cca14c749bfd793d6bc862932d990274f37a68dd534380274dbae46986d8"} Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.221170 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.350771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.350808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.350846 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhj8p\" (UniqueName: \"kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.350944 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.351001 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.351030 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.351087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift\") pod \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\" (UID: \"1ddb02d3-f5a2-4681-90fe-4d5572fed381\") " Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.352136 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.352413 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.361040 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p" (OuterVolumeSpecName: "kube-api-access-zhj8p") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "kube-api-access-zhj8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.374754 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.385630 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts" (OuterVolumeSpecName: "scripts") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.411287 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.428743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ddb02d3-f5a2-4681-90fe-4d5572fed381" (UID: "1ddb02d3-f5a2-4681-90fe-4d5572fed381"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.440081 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-42xrb"] Feb 23 18:49:34 crc kubenswrapper[4768]: E0223 18:49:34.440785 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddb02d3-f5a2-4681-90fe-4d5572fed381" containerName="swift-ring-rebalance" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.440814 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddb02d3-f5a2-4681-90fe-4d5572fed381" containerName="swift-ring-rebalance" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.441077 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddb02d3-f5a2-4681-90fe-4d5572fed381" containerName="swift-ring-rebalance" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.441909 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.447850 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-42xrb"] Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.448359 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456375 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456938 4768 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456952 4768 
reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1ddb02d3-f5a2-4681-90fe-4d5572fed381-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456963 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456976 4768 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1ddb02d3-f5a2-4681-90fe-4d5572fed381-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.456987 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhj8p\" (UniqueName: \"kubernetes.io/projected/1ddb02d3-f5a2-4681-90fe-4d5572fed381-kube-api-access-zhj8p\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.457003 4768 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1ddb02d3-f5a2-4681-90fe-4d5572fed381-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.559577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfnx\" (UniqueName: \"kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx\") pod \"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.559691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts\") pod 
\"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.661875 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfnx\" (UniqueName: \"kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx\") pod \"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.661956 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts\") pod \"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.662708 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts\") pod \"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.681051 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfnx\" (UniqueName: \"kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx\") pod \"root-account-create-update-42xrb\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.682263 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45-config-cxn9f" 
event={"ID":"c9e8b257-8aad-49d5-9746-c4506abf436f","Type":"ContainerStarted","Data":"e225380e02494ec42b14a00ef618931f63d766367eccf9085b5b44f5a893e725"} Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.684314 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9nswb" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.684232 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9nswb" event={"ID":"1ddb02d3-f5a2-4681-90fe-4d5572fed381","Type":"ContainerDied","Data":"8c01dd63fc22d75f57018f511e302ae41a55641171eaabef31bc1de88a0855c7"} Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.690896 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c01dd63fc22d75f57018f511e302ae41a55641171eaabef31bc1de88a0855c7" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.716309 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-7xj45-config-cxn9f" podStartSLOduration=2.716281991 podStartE2EDuration="2.716281991s" podCreationTimestamp="2026-02-23 18:49:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:34.705825065 +0000 UTC m=+970.096310865" watchObservedRunningTime="2026-02-23 18:49:34.716281991 +0000 UTC m=+970.106767791" Feb 23 18:49:34 crc kubenswrapper[4768]: I0223 18:49:34.765178 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:35 crc kubenswrapper[4768]: I0223 18:49:35.244637 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-42xrb"] Feb 23 18:49:35 crc kubenswrapper[4768]: I0223 18:49:35.698567 4768 generic.go:334] "Generic (PLEG): container finished" podID="c9e8b257-8aad-49d5-9746-c4506abf436f" containerID="e225380e02494ec42b14a00ef618931f63d766367eccf9085b5b44f5a893e725" exitCode=0 Feb 23 18:49:35 crc kubenswrapper[4768]: I0223 18:49:35.698609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45-config-cxn9f" event={"ID":"c9e8b257-8aad-49d5-9746-c4506abf436f","Type":"ContainerDied","Data":"e225380e02494ec42b14a00ef618931f63d766367eccf9085b5b44f5a893e725"} Feb 23 18:49:36 crc kubenswrapper[4768]: I0223 18:49:36.645522 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:36 crc kubenswrapper[4768]: I0223 18:49:36.645968 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:36 crc kubenswrapper[4768]: I0223 18:49:36.712773 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:37 crc kubenswrapper[4768]: I0223 18:49:37.440259 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-7xj45" Feb 23 18:49:39 crc kubenswrapper[4768]: I0223 18:49:39.545465 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:49:39 crc kubenswrapper[4768]: I0223 18:49:39.545784 4768 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:49:41 crc kubenswrapper[4768]: I0223 18:49:41.934201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:41 crc kubenswrapper[4768]: I0223 18:49:41.967240 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c2932248-edbb-4073-8a18-d076462b4201-etc-swift\") pod \"swift-storage-0\" (UID: \"c2932248-edbb-4073-8a18-d076462b4201\") " pod="openstack/swift-storage-0" Feb 23 18:49:41 crc kubenswrapper[4768]: I0223 18:49:41.972953 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:41 crc kubenswrapper[4768]: I0223 18:49:41.992071 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 23 18:49:42 crc kubenswrapper[4768]: I0223 18:49:42.030455 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:42 crc kubenswrapper[4768]: I0223 18:49:42.221143 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:42 crc kubenswrapper[4768]: I0223 18:49:42.861720 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.203426 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.364605 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-9ntcv"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.420551 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.430349 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9ntcv"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.485625 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-016f-account-create-update-vckb8"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.487572 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.492622 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.520757 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.561173 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-016f-account-create-update-vckb8"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.605973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj92x\" (UniqueName: \"kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606034 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606065 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606133 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606277 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: 
\"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606307 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts\") pod \"c9e8b257-8aad-49d5-9746-c4506abf436f\" (UID: \"c9e8b257-8aad-49d5-9746-c4506abf436f\") " Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606519 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606594 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qftsv\" (UniqueName: \"kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606679 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts\") pod \"cinder-db-create-9ntcv\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.606700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6vw\" (UniqueName: \"kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw\") pod \"cinder-db-create-9ntcv\" (UID: 
\"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.607450 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run" (OuterVolumeSpecName: "var-run") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.607545 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.608608 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.609780 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.610808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts" (OuterVolumeSpecName: "scripts") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.625488 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-t6rxx"] Feb 23 18:49:43 crc kubenswrapper[4768]: E0223 18:49:43.626424 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e8b257-8aad-49d5-9746-c4506abf436f" containerName="ovn-config" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.626445 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e8b257-8aad-49d5-9746-c4506abf436f" containerName="ovn-config" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.626847 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e8b257-8aad-49d5-9746-c4506abf436f" containerName="ovn-config" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.627501 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.633563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x" (OuterVolumeSpecName: "kube-api-access-tj92x") pod "c9e8b257-8aad-49d5-9746-c4506abf436f" (UID: "c9e8b257-8aad-49d5-9746-c4506abf436f"). InnerVolumeSpecName "kube-api-access-tj92x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.666317 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-c8v6w"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.667617 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.679875 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t6rxx"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.681823 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.682099 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.682377 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftws5" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.682374 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.696603 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c8v6w"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708368 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts\") pod \"cinder-db-create-9ntcv\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708433 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6vw\" (UniqueName: 
\"kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw\") pod \"cinder-db-create-9ntcv\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708555 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qftsv\" (UniqueName: \"kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708586 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnrt\" (UniqueName: \"kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708637 4768 reconciler_common.go:293] "Volume detached for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708647 4768 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708655 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708667 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj92x\" (UniqueName: \"kubernetes.io/projected/c9e8b257-8aad-49d5-9746-c4506abf436f-kube-api-access-tj92x\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708677 4768 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9e8b257-8aad-49d5-9746-c4506abf436f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.708686 4768 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/c9e8b257-8aad-49d5-9746-c4506abf436f-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.709463 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts\") pod \"cinder-db-create-9ntcv\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.709966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.739893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qftsv\" (UniqueName: \"kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv\") pod \"cinder-016f-account-create-update-vckb8\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.761691 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6vw\" (UniqueName: \"kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw\") pod \"cinder-db-create-9ntcv\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.780738 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-xbwcw"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.782323 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.799767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-42xrb" event={"ID":"8580c06d-92c6-47e7-99ff-21b0ea32de64","Type":"ContainerStarted","Data":"a6cd6bf00a3122d76367a74eb472032860570b39e49199df5e7d824b059baab4"} Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.799822 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-42xrb" event={"ID":"8580c06d-92c6-47e7-99ff-21b0ea32de64","Type":"ContainerStarted","Data":"297b1897026ced53c027d43b2668222d7c0bbc3e0cbc09f397404002cb3433ab"} Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.806521 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xbwcw"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.811858 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-7xj45-config-cxn9f" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.811940 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wh44n" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="registry-server" containerID="cri-o://ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57" gracePeriod=2 Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.812007 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-7xj45-config-cxn9f" event={"ID":"c9e8b257-8aad-49d5-9746-c4506abf436f","Type":"ContainerDied","Data":"1190cca14c749bfd793d6bc862932d990274f37a68dd534380274dbae46986d8"} Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.812089 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1190cca14c749bfd793d6bc862932d990274f37a68dd534380274dbae46986d8" Feb 23 18:49:43 crc 
kubenswrapper[4768]: I0223 18:49:43.813288 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.813358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.813615 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.813706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98dwv\" (UniqueName: \"kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.813759 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnrt\" (UniqueName: \"kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: 
I0223 18:49:43.817180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.826441 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-195c-account-create-update-2pdfs"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.838269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbnrt\" (UniqueName: \"kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt\") pod \"barbican-db-create-t6rxx\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.843491 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.849061 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.858680 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-195c-account-create-update-2pdfs"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.860222 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.889398 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-42xrb" podStartSLOduration=9.889374025 podStartE2EDuration="9.889374025s" podCreationTimestamp="2026-02-23 18:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:43.831938051 +0000 UTC m=+979.222423851" watchObservedRunningTime="2026-02-23 18:49:43.889374025 +0000 UTC m=+979.279859825" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.905135 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cd70-account-create-update-59jt8"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.906430 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.911493 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.916204 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.916328 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8c97\" (UniqueName: \"kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:43 crc 
kubenswrapper[4768]: I0223 18:49:43.916432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.916512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.916570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98dwv\" (UniqueName: \"kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.918674 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd70-account-create-update-59jt8"] Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.922038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.930672 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle\") pod \"keystone-db-sync-c8v6w\" (UID: 
\"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:43 crc kubenswrapper[4768]: I0223 18:49:43.942275 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98dwv\" (UniqueName: \"kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv\") pod \"keystone-db-sync-c8v6w\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.008234 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.010903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018029 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8c97\" (UniqueName: \"kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018140 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts\") pod 
\"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018166 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018199 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tt8d\" (UniqueName: \"kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d\") pod \"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.018291 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7vk\" (UniqueName: \"kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.019287 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.044207 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8c97\" (UniqueName: 
\"kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97\") pod \"neutron-db-create-xbwcw\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.054004 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.071510 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.113000 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.120430 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.121065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.121673 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts\") pod \"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: 
I0223 18:49:44.121155 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts\") pod \"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.121746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tt8d\" (UniqueName: \"kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d\") pod \"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.122068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7vk\" (UniqueName: \"kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.141146 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tt8d\" (UniqueName: \"kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d\") pod \"neutron-cd70-account-create-update-59jt8\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.146924 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7vk\" (UniqueName: \"kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk\") pod \"barbican-195c-account-create-update-2pdfs\" (UID: 
\"7c008f19-c09b-4721-9a15-9851f9a516ab\") " pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.182169 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.260440 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.368646 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-9ntcv"] Feb 23 18:49:44 crc kubenswrapper[4768]: W0223 18:49:44.414930 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07034a67_ca3d_4e5f_936a_b32c08b85724.slice/crio-26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632 WatchSource:0}: Error finding container 26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632: Status 404 returned error can't find the container with id 26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632 Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.633563 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.636156 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.647004 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.658180 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-7xj45-config-cxn9f"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.674825 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-7xj45-config-cxn9f"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.751100 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7r4v\" (UniqueName: \"kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.751430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.751642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.754715 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854025 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities\") pod \"b9b69ea0-d838-4dcf-be89-7d7385b50387\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9d2ms\" (UniqueName: \"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms\") pod \"b9b69ea0-d838-4dcf-be89-7d7385b50387\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854221 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content\") pod \"b9b69ea0-d838-4dcf-be89-7d7385b50387\" (UID: \"b9b69ea0-d838-4dcf-be89-7d7385b50387\") " Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854437 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854492 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.854559 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7r4v\" (UniqueName: \"kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.855804 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities" (OuterVolumeSpecName: "utilities") pod "b9b69ea0-d838-4dcf-be89-7d7385b50387" (UID: "b9b69ea0-d838-4dcf-be89-7d7385b50387"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.855822 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.856065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.860582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9ntcv" event={"ID":"07034a67-ca3d-4e5f-936a-b32c08b85724","Type":"ContainerStarted","Data":"26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632"} Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.872397 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms" (OuterVolumeSpecName: "kube-api-access-9d2ms") pod "b9b69ea0-d838-4dcf-be89-7d7385b50387" (UID: "b9b69ea0-d838-4dcf-be89-7d7385b50387"). InnerVolumeSpecName "kube-api-access-9d2ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.872841 4768 generic.go:334] "Generic (PLEG): container finished" podID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerID="ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57" exitCode=0 Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.872914 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerDied","Data":"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57"} Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.872953 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wh44n" event={"ID":"b9b69ea0-d838-4dcf-be89-7d7385b50387","Type":"ContainerDied","Data":"c4507e63999d36c8a491c5a99c2a1d619b7f504127600b67d214e008c3a0e680"} Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.872974 4768 scope.go:117] "RemoveContainer" containerID="ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.873142 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wh44n" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.880461 4768 generic.go:334] "Generic (PLEG): container finished" podID="8580c06d-92c6-47e7-99ff-21b0ea32de64" containerID="a6cd6bf00a3122d76367a74eb472032860570b39e49199df5e7d824b059baab4" exitCode=0 Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.880627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-42xrb" event={"ID":"8580c06d-92c6-47e7-99ff-21b0ea32de64","Type":"ContainerDied","Data":"a6cd6bf00a3122d76367a74eb472032860570b39e49199df5e7d824b059baab4"} Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.888095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"c5c55ec2ee0b7efe65d71ee6ccbaec39aea46d4f0bc0a58e91a2187136a0980d"} Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.896284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7r4v\" (UniqueName: \"kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v\") pod \"redhat-marketplace-6g89c\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.922537 4768 scope.go:117] "RemoveContainer" containerID="9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.927531 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-c8v6w"] Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.955763 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 
18:49:44.955788 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9d2ms\" (UniqueName: \"kubernetes.io/projected/b9b69ea0-d838-4dcf-be89-7d7385b50387-kube-api-access-9d2ms\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:44 crc kubenswrapper[4768]: I0223 18:49:44.987840 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-t6rxx"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.015923 4768 scope.go:117] "RemoveContainer" containerID="ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.020733 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-016f-account-create-update-vckb8"] Feb 23 18:49:45 crc kubenswrapper[4768]: W0223 18:49:45.026373 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dee7c39_12a7_42a0_8c19_3420b5dcb63e.slice/crio-6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480 WatchSource:0}: Error finding container 6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480: Status 404 returned error can't find the container with id 6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480 Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.084985 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.090601 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-195c-account-create-update-2pdfs"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.091740 4768 scope.go:117] "RemoveContainer" containerID="ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.096971 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-xbwcw"] Feb 23 18:49:45 crc kubenswrapper[4768]: E0223 18:49:45.112458 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57\": container with ID starting with ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57 not found: ID does not exist" containerID="ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.112508 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57"} err="failed to get container status \"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57\": rpc error: code = NotFound desc = could not find container \"ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57\": container with ID starting with ee75a98eb404ec9e47edac9fa7283fc2e7df0a865042bbdc6456173265c84e57 not found: ID does not exist" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.112539 4768 scope.go:117] "RemoveContainer" containerID="9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109" Feb 23 18:49:45 crc kubenswrapper[4768]: E0223 18:49:45.115936 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109\": container with ID starting with 9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109 not found: ID does not exist" containerID="9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.115968 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109"} err="failed to get container status \"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109\": rpc error: code = NotFound desc = could not find container \"9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109\": container with ID starting with 9fd0facb0bed81cf24660131e9a7b10a9b169d4f76fd95eaf7eb62e3db996109 not found: ID does not exist" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.115987 4768 scope.go:117] "RemoveContainer" containerID="ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9" Feb 23 18:49:45 crc kubenswrapper[4768]: E0223 18:49:45.116931 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9\": container with ID starting with ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9 not found: ID does not exist" containerID="ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.116963 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9"} err="failed to get container status \"ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9\": rpc error: code = NotFound desc = could not find container \"ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9\": 
container with ID starting with ccb11c19426071a44bb170dd3112d7f62a4b6d2b997f312150997d682568fea9 not found: ID does not exist" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.122310 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cd70-account-create-update-59jt8"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.134031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9b69ea0-d838-4dcf-be89-7d7385b50387" (UID: "b9b69ea0-d838-4dcf-be89-7d7385b50387"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:49:45 crc kubenswrapper[4768]: W0223 18:49:45.151067 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a997222_9831_4e01_ac9b_34383ec3649e.slice/crio-4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26 WatchSource:0}: Error finding container 4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26: Status 404 returned error can't find the container with id 4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26 Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.158653 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9b69ea0-d838-4dcf-be89-7d7385b50387-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.224488 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.226620 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wh44n"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.330499 4768 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" path="/var/lib/kubelet/pods/b9b69ea0-d838-4dcf-be89-7d7385b50387/volumes" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.331074 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e8b257-8aad-49d5-9746-c4506abf436f" path="/var/lib/kubelet/pods/c9e8b257-8aad-49d5-9746-c4506abf436f/volumes" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.885236 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.938218 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-016f-account-create-update-vckb8" event={"ID":"3dee7c39-12a7-42a0-8c19-3420b5dcb63e","Type":"ContainerStarted","Data":"dfa2ccbe7828074aa2f65589ee7290d54da92d2f07a5bd8c8e8f4d4d781323b9"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.938284 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-016f-account-create-update-vckb8" event={"ID":"3dee7c39-12a7-42a0-8c19-3420b5dcb63e","Type":"ContainerStarted","Data":"6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.946563 4768 generic.go:334] "Generic (PLEG): container finished" podID="2936a6fe-a582-43cb-a967-e99ba45903ea" containerID="9de828e05cb8f4c10f2cd56f9df5d04f16f6b9c8a5b0b6810a8d2713efe6fc34" exitCode=0 Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.946628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6rxx" event={"ID":"2936a6fe-a582-43cb-a967-e99ba45903ea","Type":"ContainerDied","Data":"9de828e05cb8f4c10f2cd56f9df5d04f16f6b9c8a5b0b6810a8d2713efe6fc34"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.946656 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6rxx" 
event={"ID":"2936a6fe-a582-43cb-a967-e99ba45903ea","Type":"ContainerStarted","Data":"bc79a5ed3fcd147cad443e4670b9a96515a711dc58d2d68eefdf73346236cc44"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.948638 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-195c-account-create-update-2pdfs" event={"ID":"7c008f19-c09b-4721-9a15-9851f9a516ab","Type":"ContainerStarted","Data":"ea16179cae17c36e9b4acb8220a5ba5d4a17774265e42c892beb0070e4ee8ded"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.948692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-195c-account-create-update-2pdfs" event={"ID":"7c008f19-c09b-4721-9a15-9851f9a516ab","Type":"ContainerStarted","Data":"bc3a4ef57374c174e53a160222375a2990ae0118e8b8353ff39ca885affea65a"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.950745 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd70-account-create-update-59jt8" event={"ID":"4a997222-9831-4e01-ac9b-34383ec3649e","Type":"ContainerStarted","Data":"7be0bee3f167d6086a636b359d7101c8428b5f2cf0b31976319d9c36ebd5eef1"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.950789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd70-account-create-update-59jt8" event={"ID":"4a997222-9831-4e01-ac9b-34383ec3649e","Type":"ContainerStarted","Data":"4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.952713 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rfckb" event={"ID":"513bdad8-19c5-4fea-aaef-afecd7f21ab3","Type":"ContainerStarted","Data":"e83aa32b83d68c92344f8eba9aa0d5828014a1e082b665e1d8359a6873b1ea56"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.961182 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c8v6w" 
event={"ID":"b252582c-b708-4d5d-be78-dc90b4bd3990","Type":"ContainerStarted","Data":"6c0812d55681ca06f2b4014b7f7e29397ac777c516457a203429d413e93b4c32"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.965019 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xbwcw" event={"ID":"786b1f7f-e1c7-4002-a1db-33c44f0ad098","Type":"ContainerStarted","Data":"e2e48ee46ef399153874e6c41c4fd558d4c91072ebe81da5c7a5af5671ac9490"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.965075 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xbwcw" event={"ID":"786b1f7f-e1c7-4002-a1db-33c44f0ad098","Type":"ContainerStarted","Data":"1eb20c9ca65b6cd409680ad1dd496d4a4c1f3c25aa71b12b8988f9db4661b352"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.965582 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-016f-account-create-update-vckb8" podStartSLOduration=2.965563281 podStartE2EDuration="2.965563281s" podCreationTimestamp="2026-02-23 18:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:45.959432043 +0000 UTC m=+981.349917853" watchObservedRunningTime="2026-02-23 18:49:45.965563281 +0000 UTC m=+981.356049081" Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.967392 4768 generic.go:334] "Generic (PLEG): container finished" podID="07034a67-ca3d-4e5f-936a-b32c08b85724" containerID="3a86350df92e3452a365a4c07e3d30237200dcc26fbe6d1785ea9447976bab99" exitCode=0 Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.967459 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9ntcv" event={"ID":"07034a67-ca3d-4e5f-936a-b32c08b85724","Type":"ContainerDied","Data":"3a86350df92e3452a365a4c07e3d30237200dcc26fbe6d1785ea9447976bab99"} Feb 23 18:49:45 crc kubenswrapper[4768]: I0223 18:49:45.988628 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-195c-account-create-update-2pdfs" podStartSLOduration=2.988612474 podStartE2EDuration="2.988612474s" podCreationTimestamp="2026-02-23 18:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:45.982059553 +0000 UTC m=+981.372545353" watchObservedRunningTime="2026-02-23 18:49:45.988612474 +0000 UTC m=+981.379098274" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.031648 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-rfckb" podStartSLOduration=4.302965097 podStartE2EDuration="20.031632962s" podCreationTimestamp="2026-02-23 18:49:26 +0000 UTC" firstStartedPulling="2026-02-23 18:49:27.83454891 +0000 UTC m=+963.225034710" lastFinishedPulling="2026-02-23 18:49:43.563216775 +0000 UTC m=+978.953702575" observedRunningTime="2026-02-23 18:49:46.028910288 +0000 UTC m=+981.419396088" watchObservedRunningTime="2026-02-23 18:49:46.031632962 +0000 UTC m=+981.422118762" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.055446 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cd70-account-create-update-59jt8" podStartSLOduration=3.055415424 podStartE2EDuration="3.055415424s" podCreationTimestamp="2026-02-23 18:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:46.046229082 +0000 UTC m=+981.436714902" watchObservedRunningTime="2026-02-23 18:49:46.055415424 +0000 UTC m=+981.445901224" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.077354 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-xbwcw" podStartSLOduration=3.077323434 podStartE2EDuration="3.077323434s" podCreationTimestamp="2026-02-23 18:49:43 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:49:46.076078371 +0000 UTC m=+981.466564171" watchObservedRunningTime="2026-02-23 18:49:46.077323434 +0000 UTC m=+981.467809234" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.676456 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.739024 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.790172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wfnx\" (UniqueName: \"kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx\") pod \"8580c06d-92c6-47e7-99ff-21b0ea32de64\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.790297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts\") pod \"8580c06d-92c6-47e7-99ff-21b0ea32de64\" (UID: \"8580c06d-92c6-47e7-99ff-21b0ea32de64\") " Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.791617 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8580c06d-92c6-47e7-99ff-21b0ea32de64" (UID: "8580c06d-92c6-47e7-99ff-21b0ea32de64"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.812998 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx" (OuterVolumeSpecName: "kube-api-access-7wfnx") pod "8580c06d-92c6-47e7-99ff-21b0ea32de64" (UID: "8580c06d-92c6-47e7-99ff-21b0ea32de64"). InnerVolumeSpecName "kube-api-access-7wfnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.892535 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8580c06d-92c6-47e7-99ff-21b0ea32de64-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.892565 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wfnx\" (UniqueName: \"kubernetes.io/projected/8580c06d-92c6-47e7-99ff-21b0ea32de64-kube-api-access-7wfnx\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.979742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"5cb945dc28498ee69e869f226fca9cfd072da0dbcf635e0ab2519efe5d126b76"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.979824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"a49de52a2ae5d84fda178c60a482b3a1e3ca24e809928500efe1634cf340e53b"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.983122 4768 generic.go:334] "Generic (PLEG): container finished" podID="4a997222-9831-4e01-ac9b-34383ec3649e" containerID="7be0bee3f167d6086a636b359d7101c8428b5f2cf0b31976319d9c36ebd5eef1" exitCode=0 Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.983181 
4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd70-account-create-update-59jt8" event={"ID":"4a997222-9831-4e01-ac9b-34383ec3649e","Type":"ContainerDied","Data":"7be0bee3f167d6086a636b359d7101c8428b5f2cf0b31976319d9c36ebd5eef1"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.985768 4768 generic.go:334] "Generic (PLEG): container finished" podID="786b1f7f-e1c7-4002-a1db-33c44f0ad098" containerID="e2e48ee46ef399153874e6c41c4fd558d4c91072ebe81da5c7a5af5671ac9490" exitCode=0 Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.985878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xbwcw" event={"ID":"786b1f7f-e1c7-4002-a1db-33c44f0ad098","Type":"ContainerDied","Data":"e2e48ee46ef399153874e6c41c4fd558d4c91072ebe81da5c7a5af5671ac9490"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.987597 4768 generic.go:334] "Generic (PLEG): container finished" podID="3dee7c39-12a7-42a0-8c19-3420b5dcb63e" containerID="dfa2ccbe7828074aa2f65589ee7290d54da92d2f07a5bd8c8e8f4d4d781323b9" exitCode=0 Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.987650 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-016f-account-create-update-vckb8" event={"ID":"3dee7c39-12a7-42a0-8c19-3420b5dcb63e","Type":"ContainerDied","Data":"dfa2ccbe7828074aa2f65589ee7290d54da92d2f07a5bd8c8e8f4d4d781323b9"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.989507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-42xrb" event={"ID":"8580c06d-92c6-47e7-99ff-21b0ea32de64","Type":"ContainerDied","Data":"297b1897026ced53c027d43b2668222d7c0bbc3e0cbc09f397404002cb3433ab"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.989538 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="297b1897026ced53c027d43b2668222d7c0bbc3e0cbc09f397404002cb3433ab" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.989590 
4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-42xrb" Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.994487 4768 generic.go:334] "Generic (PLEG): container finished" podID="7c008f19-c09b-4721-9a15-9851f9a516ab" containerID="ea16179cae17c36e9b4acb8220a5ba5d4a17774265e42c892beb0070e4ee8ded" exitCode=0 Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.994558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-195c-account-create-update-2pdfs" event={"ID":"7c008f19-c09b-4721-9a15-9851f9a516ab","Type":"ContainerDied","Data":"ea16179cae17c36e9b4acb8220a5ba5d4a17774265e42c892beb0070e4ee8ded"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.996348 4768 generic.go:334] "Generic (PLEG): container finished" podID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerID="ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9" exitCode=0 Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.996661 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerDied","Data":"ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9"} Feb 23 18:49:46 crc kubenswrapper[4768]: I0223 18:49:46.996701 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerStarted","Data":"e068c5c84ba86baac6e0873937fd06b2304a939a31b938581391418a9b16743a"} Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.460937 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.611808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6vw\" (UniqueName: \"kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw\") pod \"07034a67-ca3d-4e5f-936a-b32c08b85724\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.612468 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts\") pod \"07034a67-ca3d-4e5f-936a-b32c08b85724\" (UID: \"07034a67-ca3d-4e5f-936a-b32c08b85724\") " Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.614593 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07034a67-ca3d-4e5f-936a-b32c08b85724" (UID: "07034a67-ca3d-4e5f-936a-b32c08b85724"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.638911 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw" (OuterVolumeSpecName: "kube-api-access-5j6vw") pod "07034a67-ca3d-4e5f-936a-b32c08b85724" (UID: "07034a67-ca3d-4e5f-936a-b32c08b85724"). InnerVolumeSpecName "kube-api-access-5j6vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.698485 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.720034 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j6vw\" (UniqueName: \"kubernetes.io/projected/07034a67-ca3d-4e5f-936a-b32c08b85724-kube-api-access-5j6vw\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.720081 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07034a67-ca3d-4e5f-936a-b32c08b85724-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.821172 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnrt\" (UniqueName: \"kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt\") pod \"2936a6fe-a582-43cb-a967-e99ba45903ea\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.821275 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts\") pod \"2936a6fe-a582-43cb-a967-e99ba45903ea\" (UID: \"2936a6fe-a582-43cb-a967-e99ba45903ea\") " Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.822463 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2936a6fe-a582-43cb-a967-e99ba45903ea" (UID: "2936a6fe-a582-43cb-a967-e99ba45903ea"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.824439 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt" (OuterVolumeSpecName: "kube-api-access-hbnrt") pod "2936a6fe-a582-43cb-a967-e99ba45903ea" (UID: "2936a6fe-a582-43cb-a967-e99ba45903ea"). InnerVolumeSpecName "kube-api-access-hbnrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.926372 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnrt\" (UniqueName: \"kubernetes.io/projected/2936a6fe-a582-43cb-a967-e99ba45903ea-kube-api-access-hbnrt\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:47 crc kubenswrapper[4768]: I0223 18:49:47.926424 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2936a6fe-a582-43cb-a967-e99ba45903ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.009744 4768 generic.go:334] "Generic (PLEG): container finished" podID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerID="2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad" exitCode=0 Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.009850 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerDied","Data":"2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad"} Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.015205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"935cb43ef8da63a9982d8c38e9a4693a5677f20f0f19cd8988625429c14802ba"} Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 
18:49:48.015239 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"9d441afe6761ca571a7f725a479539c8bea02ca4c58a99226ab74816e8e43d21"} Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.017504 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-t6rxx" Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.018011 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-t6rxx" event={"ID":"2936a6fe-a582-43cb-a967-e99ba45903ea","Type":"ContainerDied","Data":"bc79a5ed3fcd147cad443e4670b9a96515a711dc58d2d68eefdf73346236cc44"} Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.018058 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc79a5ed3fcd147cad443e4670b9a96515a711dc58d2d68eefdf73346236cc44" Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.020906 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-9ntcv" Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.022422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-9ntcv" event={"ID":"07034a67-ca3d-4e5f-936a-b32c08b85724","Type":"ContainerDied","Data":"26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632"} Feb 23 18:49:48 crc kubenswrapper[4768]: I0223 18:49:48.022462 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26c8282aaaa507e6bfa469ee5a53b678d9c865a897db1cf574150a184b657632" Feb 23 18:49:50 crc kubenswrapper[4768]: I0223 18:49:50.817304 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjn26"] Feb 23 18:49:50 crc kubenswrapper[4768]: I0223 18:49:50.818086 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tjn26" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="registry-server" containerID="cri-o://5d1e02f78f781b5eac88707d1e24e40242f97d495f10f4ed197ca79ef9e3b1a3" gracePeriod=2 Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.085971 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-195c-account-create-update-2pdfs" event={"ID":"7c008f19-c09b-4721-9a15-9851f9a516ab","Type":"ContainerDied","Data":"bc3a4ef57374c174e53a160222375a2990ae0118e8b8353ff39ca885affea65a"} Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.086468 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc3a4ef57374c174e53a160222375a2990ae0118e8b8353ff39ca885affea65a" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.095616 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-xbwcw" event={"ID":"786b1f7f-e1c7-4002-a1db-33c44f0ad098","Type":"ContainerDied","Data":"1eb20c9ca65b6cd409680ad1dd496d4a4c1f3c25aa71b12b8988f9db4661b352"} Feb 23 18:49:51 
crc kubenswrapper[4768]: I0223 18:49:51.095659 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1eb20c9ca65b6cd409680ad1dd496d4a4c1f3c25aa71b12b8988f9db4661b352" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.100089 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.110464 4768 generic.go:334] "Generic (PLEG): container finished" podID="eff6033d-2c50-420e-a764-e6e100dead6e" containerID="5d1e02f78f781b5eac88707d1e24e40242f97d495f10f4ed197ca79ef9e3b1a3" exitCode=0 Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.110512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerDied","Data":"5d1e02f78f781b5eac88707d1e24e40242f97d495f10f4ed197ca79ef9e3b1a3"} Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.112226 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.192197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt7vk\" (UniqueName: \"kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk\") pod \"7c008f19-c09b-4721-9a15-9851f9a516ab\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.192314 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts\") pod \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.192373 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts\") pod \"7c008f19-c09b-4721-9a15-9851f9a516ab\" (UID: \"7c008f19-c09b-4721-9a15-9851f9a516ab\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.192420 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8c97\" (UniqueName: \"kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97\") pod \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\" (UID: \"786b1f7f-e1c7-4002-a1db-33c44f0ad098\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.196578 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "786b1f7f-e1c7-4002-a1db-33c44f0ad098" (UID: "786b1f7f-e1c7-4002-a1db-33c44f0ad098"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.196808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c008f19-c09b-4721-9a15-9851f9a516ab" (UID: "7c008f19-c09b-4721-9a15-9851f9a516ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.214515 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk" (OuterVolumeSpecName: "kube-api-access-wt7vk") pod "7c008f19-c09b-4721-9a15-9851f9a516ab" (UID: "7c008f19-c09b-4721-9a15-9851f9a516ab"). InnerVolumeSpecName "kube-api-access-wt7vk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.216902 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97" (OuterVolumeSpecName: "kube-api-access-l8c97") pod "786b1f7f-e1c7-4002-a1db-33c44f0ad098" (UID: "786b1f7f-e1c7-4002-a1db-33c44f0ad098"). InnerVolumeSpecName "kube-api-access-l8c97". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.294702 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/786b1f7f-e1c7-4002-a1db-33c44f0ad098-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.294749 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c008f19-c09b-4721-9a15-9851f9a516ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.294761 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8c97\" (UniqueName: \"kubernetes.io/projected/786b1f7f-e1c7-4002-a1db-33c44f0ad098-kube-api-access-l8c97\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.294780 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt7vk\" (UniqueName: \"kubernetes.io/projected/7c008f19-c09b-4721-9a15-9851f9a516ab-kube-api-access-wt7vk\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.747471 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.748020 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.914701 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tt8d\" (UniqueName: \"kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d\") pod \"4a997222-9831-4e01-ac9b-34383ec3649e\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.915126 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qftsv\" (UniqueName: \"kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv\") pod \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.915181 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts\") pod \"4a997222-9831-4e01-ac9b-34383ec3649e\" (UID: \"4a997222-9831-4e01-ac9b-34383ec3649e\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.915208 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts\") pod \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\" (UID: \"3dee7c39-12a7-42a0-8c19-3420b5dcb63e\") " Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.916731 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3dee7c39-12a7-42a0-8c19-3420b5dcb63e" (UID: "3dee7c39-12a7-42a0-8c19-3420b5dcb63e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.918316 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a997222-9831-4e01-ac9b-34383ec3649e" (UID: "4a997222-9831-4e01-ac9b-34383ec3649e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.925753 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv" (OuterVolumeSpecName: "kube-api-access-qftsv") pod "3dee7c39-12a7-42a0-8c19-3420b5dcb63e" (UID: "3dee7c39-12a7-42a0-8c19-3420b5dcb63e"). InnerVolumeSpecName "kube-api-access-qftsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:51 crc kubenswrapper[4768]: I0223 18:49:51.932282 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d" (OuterVolumeSpecName: "kube-api-access-6tt8d") pod "4a997222-9831-4e01-ac9b-34383ec3649e" (UID: "4a997222-9831-4e01-ac9b-34383ec3649e"). InnerVolumeSpecName "kube-api-access-6tt8d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.018074 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a997222-9831-4e01-ac9b-34383ec3649e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.018106 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.018116 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tt8d\" (UniqueName: \"kubernetes.io/projected/4a997222-9831-4e01-ac9b-34383ec3649e-kube-api-access-6tt8d\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.018128 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qftsv\" (UniqueName: \"kubernetes.io/projected/3dee7c39-12a7-42a0-8c19-3420b5dcb63e-kube-api-access-qftsv\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.142422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-016f-account-create-update-vckb8" event={"ID":"3dee7c39-12a7-42a0-8c19-3420b5dcb63e","Type":"ContainerDied","Data":"6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480"} Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.142730 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a61c9f690a49ffb9974f834fb4ae93bbba530d90fac82e31096526fbc0c4480" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.143480 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-016f-account-create-update-vckb8" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.186689 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerStarted","Data":"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7"} Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.194406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"15ec5cce55dd50312a56f925179cff27a076339bece28b4db86bf6d29c427e59"} Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.197622 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cd70-account-create-update-59jt8" event={"ID":"4a997222-9831-4e01-ac9b-34383ec3649e","Type":"ContainerDied","Data":"4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26"} Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.197668 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbd358ef14f174fe8265f7ac28c269e54e20b376e91cf2e5b96aa9ce7275f26" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.197756 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cd70-account-create-update-59jt8" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.200592 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-195c-account-create-update-2pdfs" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.202174 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c8v6w" event={"ID":"b252582c-b708-4d5d-be78-dc90b4bd3990","Type":"ContainerStarted","Data":"dc0f3d5faad33c49d050477fa8cafeb7f2419b4f3e81143cb6020cadff877def"} Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.202227 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-xbwcw" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.230726 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6g89c" podStartSLOduration=3.487532209 podStartE2EDuration="8.230668116s" podCreationTimestamp="2026-02-23 18:49:44 +0000 UTC" firstStartedPulling="2026-02-23 18:49:47.007460859 +0000 UTC m=+982.397946659" lastFinishedPulling="2026-02-23 18:49:51.750596726 +0000 UTC m=+987.141082566" observedRunningTime="2026-02-23 18:49:52.225037081 +0000 UTC m=+987.615522881" watchObservedRunningTime="2026-02-23 18:49:52.230668116 +0000 UTC m=+987.621153916" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.255426 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-c8v6w" podStartSLOduration=2.515891007 podStartE2EDuration="9.255407954s" podCreationTimestamp="2026-02-23 18:49:43 +0000 UTC" firstStartedPulling="2026-02-23 18:49:44.952479944 +0000 UTC m=+980.342965744" lastFinishedPulling="2026-02-23 18:49:51.691996891 +0000 UTC m=+987.082482691" observedRunningTime="2026-02-23 18:49:52.243051625 +0000 UTC m=+987.633537425" watchObservedRunningTime="2026-02-23 18:49:52.255407954 +0000 UTC m=+987.645893754" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.285621 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.447906 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nch8c\" (UniqueName: \"kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c\") pod \"eff6033d-2c50-420e-a764-e6e100dead6e\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.448029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content\") pod \"eff6033d-2c50-420e-a764-e6e100dead6e\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.448111 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities\") pod \"eff6033d-2c50-420e-a764-e6e100dead6e\" (UID: \"eff6033d-2c50-420e-a764-e6e100dead6e\") " Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.453726 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities" (OuterVolumeSpecName: "utilities") pod "eff6033d-2c50-420e-a764-e6e100dead6e" (UID: "eff6033d-2c50-420e-a764-e6e100dead6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.459408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c" (OuterVolumeSpecName: "kube-api-access-nch8c") pod "eff6033d-2c50-420e-a764-e6e100dead6e" (UID: "eff6033d-2c50-420e-a764-e6e100dead6e"). InnerVolumeSpecName "kube-api-access-nch8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.510807 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eff6033d-2c50-420e-a764-e6e100dead6e" (UID: "eff6033d-2c50-420e-a764-e6e100dead6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.550562 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nch8c\" (UniqueName: \"kubernetes.io/projected/eff6033d-2c50-420e-a764-e6e100dead6e-kube-api-access-nch8c\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.550593 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:52 crc kubenswrapper[4768]: I0223 18:49:52.550602 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff6033d-2c50-420e-a764-e6e100dead6e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.212115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"e1ea32ee850397a6d7aa1694edb597bf70b62360880e79ebda0e3925fb22fcb1"} Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.212564 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"42c9bbee6365d3e68a08f6149c48e8fc93f392e46f866ecae0015858d575b08e"} Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.212588 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"bf8ea8129e1806e3b1b45a82e28672204a76027cc8512a25e309b6acb028eb5a"} Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.214758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tjn26" event={"ID":"eff6033d-2c50-420e-a764-e6e100dead6e","Type":"ContainerDied","Data":"d4caf62c69332936519bd99a3b001d115008aeb91c2e4d4ed6032abfb798d23a"} Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.214824 4768 scope.go:117] "RemoveContainer" containerID="5d1e02f78f781b5eac88707d1e24e40242f97d495f10f4ed197ca79ef9e3b1a3" Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.214992 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tjn26" Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.237948 4768 scope.go:117] "RemoveContainer" containerID="3214e8a06caaa7ad269810877da491b75a8f00fdf204ce934ccae4b1c3827abc" Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.264944 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tjn26"] Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.271271 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tjn26"] Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.273452 4768 scope.go:117] "RemoveContainer" containerID="8091ad0fa8d2246a68552a3250723263b303cf9a9e85190565f9e835c88e546e" Feb 23 18:49:53 crc kubenswrapper[4768]: I0223 18:49:53.317066 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" path="/var/lib/kubelet/pods/eff6033d-2c50-420e-a764-e6e100dead6e/volumes" Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.088387 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.089044 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.206994 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.236263 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"dcec9d4c783a29f33ea4d7080d00219453935aad4e04d3600c5f20e7d4a884a3"} Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.236307 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"3f05547269c1df691a8b84c7fbdad8bb4732a13d6788a3e9e40358eafaa5dc1a"} Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.236319 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"51093a18f33b58cae21069fb26527a0a6b35fc7fbf67d4e497f7cf21c8b53bfc"} Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.236329 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"7825574a199894daffd773fd293d522adcd204de9f10eddb6432f36554dcff15"} Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.236337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"3cc79ae4abbee0573ea400879bfbc5b9018b0f82333a451ead1c1f60ee978617"} Feb 23 18:49:55 crc kubenswrapper[4768]: 
I0223 18:49:55.238436 4768 generic.go:334] "Generic (PLEG): container finished" podID="513bdad8-19c5-4fea-aaef-afecd7f21ab3" containerID="e83aa32b83d68c92344f8eba9aa0d5828014a1e082b665e1d8359a6873b1ea56" exitCode=0 Feb 23 18:49:55 crc kubenswrapper[4768]: I0223 18:49:55.238682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rfckb" event={"ID":"513bdad8-19c5-4fea-aaef-afecd7f21ab3","Type":"ContainerDied","Data":"e83aa32b83d68c92344f8eba9aa0d5828014a1e082b665e1d8359a6873b1ea56"} Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.257998 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"39aa47494c36a39263722ed7217bfee4bb9abec28009efeb3ad3548cabf2c1db"} Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.817781 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-rfckb" Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.927853 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data\") pod \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.927928 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle\") pod \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.927982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cjjk\" (UniqueName: \"kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk\") pod 
\"513bdad8-19c5-4fea-aaef-afecd7f21ab3\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.928032 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data\") pod \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\" (UID: \"513bdad8-19c5-4fea-aaef-afecd7f21ab3\") " Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.934565 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk" (OuterVolumeSpecName: "kube-api-access-6cjjk") pod "513bdad8-19c5-4fea-aaef-afecd7f21ab3" (UID: "513bdad8-19c5-4fea-aaef-afecd7f21ab3"). InnerVolumeSpecName "kube-api-access-6cjjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.934567 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "513bdad8-19c5-4fea-aaef-afecd7f21ab3" (UID: "513bdad8-19c5-4fea-aaef-afecd7f21ab3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.961525 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "513bdad8-19c5-4fea-aaef-afecd7f21ab3" (UID: "513bdad8-19c5-4fea-aaef-afecd7f21ab3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:56 crc kubenswrapper[4768]: I0223 18:49:56.972781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data" (OuterVolumeSpecName: "config-data") pod "513bdad8-19c5-4fea-aaef-afecd7f21ab3" (UID: "513bdad8-19c5-4fea-aaef-afecd7f21ab3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.031370 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.031429 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.031453 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cjjk\" (UniqueName: \"kubernetes.io/projected/513bdad8-19c5-4fea-aaef-afecd7f21ab3-kube-api-access-6cjjk\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.031469 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/513bdad8-19c5-4fea-aaef-afecd7f21ab3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.268051 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-rfckb" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.268100 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-rfckb" event={"ID":"513bdad8-19c5-4fea-aaef-afecd7f21ab3","Type":"ContainerDied","Data":"cc64bba9567279e67d79c99705fe47992bde06adaed0a8fc237ec86c6ef61877"} Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.268173 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc64bba9567279e67d79c99705fe47992bde06adaed0a8fc237ec86c6ef61877" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.270522 4768 generic.go:334] "Generic (PLEG): container finished" podID="b252582c-b708-4d5d-be78-dc90b4bd3990" containerID="dc0f3d5faad33c49d050477fa8cafeb7f2419b4f3e81143cb6020cadff877def" exitCode=0 Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.270572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c8v6w" event={"ID":"b252582c-b708-4d5d-be78-dc90b4bd3990","Type":"ContainerDied","Data":"dc0f3d5faad33c49d050477fa8cafeb7f2419b4f3e81143cb6020cadff877def"} Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.745962 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746469 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513bdad8-19c5-4fea-aaef-afecd7f21ab3" containerName="glance-db-sync" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746491 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="513bdad8-19c5-4fea-aaef-afecd7f21ab3" containerName="glance-db-sync" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746508 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8580c06d-92c6-47e7-99ff-21b0ea32de64" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746519 4768 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8580c06d-92c6-47e7-99ff-21b0ea32de64" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746527 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2936a6fe-a582-43cb-a967-e99ba45903ea" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746537 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2936a6fe-a582-43cb-a967-e99ba45903ea" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746550 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="registry-server" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746557 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="registry-server" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746581 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dee7c39-12a7-42a0-8c19-3420b5dcb63e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746588 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dee7c39-12a7-42a0-8c19-3420b5dcb63e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746609 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="extract-content" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746617 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="extract-content" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746628 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="registry-server" Feb 23 18:49:57 crc 
kubenswrapper[4768]: I0223 18:49:57.746636 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="registry-server" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746653 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a997222-9831-4e01-ac9b-34383ec3649e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746661 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a997222-9831-4e01-ac9b-34383ec3649e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746673 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="extract-utilities" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746680 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="extract-utilities" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746695 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07034a67-ca3d-4e5f-936a-b32c08b85724" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746703 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="07034a67-ca3d-4e5f-936a-b32c08b85724" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746720 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="extract-content" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746727 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="extract-content" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746739 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c008f19-c09b-4721-9a15-9851f9a516ab" 
containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746746 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c008f19-c09b-4721-9a15-9851f9a516ab" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746760 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786b1f7f-e1c7-4002-a1db-33c44f0ad098" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746767 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="786b1f7f-e1c7-4002-a1db-33c44f0ad098" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: E0223 18:49:57.746775 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="extract-utilities" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746782 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="extract-utilities" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746949 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff6033d-2c50-420e-a764-e6e100dead6e" containerName="registry-server" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746965 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="786b1f7f-e1c7-4002-a1db-33c44f0ad098" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746975 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="07034a67-ca3d-4e5f-936a-b32c08b85724" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746988 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dee7c39-12a7-42a0-8c19-3420b5dcb63e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.746995 4768 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="b9b69ea0-d838-4dcf-be89-7d7385b50387" containerName="registry-server" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.747007 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c008f19-c09b-4721-9a15-9851f9a516ab" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.747017 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8580c06d-92c6-47e7-99ff-21b0ea32de64" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.747023 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2936a6fe-a582-43cb-a967-e99ba45903ea" containerName="mariadb-database-create" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.747033 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a997222-9831-4e01-ac9b-34383ec3649e" containerName="mariadb-account-create-update" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.747040 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="513bdad8-19c5-4fea-aaef-afecd7f21ab3" containerName="glance-db-sync" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.748104 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.777219 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.848527 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.848613 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.848682 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.848706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc2hf\" (UniqueName: \"kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.848876 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.950201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.950312 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.950366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.950403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc2hf\" (UniqueName: \"kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.950439 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.951711 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.951736 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.951802 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.951831 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb\") pod \"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:57 crc kubenswrapper[4768]: I0223 18:49:57.989768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc2hf\" (UniqueName: \"kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf\") pod 
\"dnsmasq-dns-5b946c75cc-mptpp\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") " pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.067504 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.294665 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"c2932248-edbb-4073-8a18-d076462b4201","Type":"ContainerStarted","Data":"3386e4142130a03f7e5fc2db5e6a2be5652fc9db231e2d9a593dde37b18cc239"} Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.346200 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.322519405 podStartE2EDuration="50.346171899s" podCreationTimestamp="2026-02-23 18:49:08 +0000 UTC" firstStartedPulling="2026-02-23 18:49:44.077186532 +0000 UTC m=+979.467672332" lastFinishedPulling="2026-02-23 18:49:54.100839016 +0000 UTC m=+989.491324826" observedRunningTime="2026-02-23 18:49:58.334110579 +0000 UTC m=+993.724596409" watchObservedRunningTime="2026-02-23 18:49:58.346171899 +0000 UTC m=+993.736657739" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.558634 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:49:58 crc kubenswrapper[4768]: W0223 18:49:58.609363 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85c2b171_151e_42fe_b653_b9edfbd33766.slice/crio-6be651833c9a51d1774bafb15d21b6de684bb0aad999e1d4d19c435d5a12aee0 WatchSource:0}: Error finding container 6be651833c9a51d1774bafb15d21b6de684bb0aad999e1d4d19c435d5a12aee0: Status 404 returned error can't find the container with id 6be651833c9a51d1774bafb15d21b6de684bb0aad999e1d4d19c435d5a12aee0 Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.750107 4768 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.770675 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"] Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.771996 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.774289 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.796860 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"] Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.806688 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c8v6w" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872027 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle\") pod \"b252582c-b708-4d5d-be78-dc90b4bd3990\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872114 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98dwv\" (UniqueName: \"kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv\") pod \"b252582c-b708-4d5d-be78-dc90b4bd3990\" (UID: \"b252582c-b708-4d5d-be78-dc90b4bd3990\") " Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872191 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data\") pod \"b252582c-b708-4d5d-be78-dc90b4bd3990\" (UID: 
\"b252582c-b708-4d5d-be78-dc90b4bd3990\") " Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872517 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872602 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsc69\" (UniqueName: \"kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872666 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872691 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: 
\"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.872710 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.879921 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv" (OuterVolumeSpecName: "kube-api-access-98dwv") pod "b252582c-b708-4d5d-be78-dc90b4bd3990" (UID: "b252582c-b708-4d5d-be78-dc90b4bd3990"). InnerVolumeSpecName "kube-api-access-98dwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.924515 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b252582c-b708-4d5d-be78-dc90b4bd3990" (UID: "b252582c-b708-4d5d-be78-dc90b4bd3990"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.936937 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data" (OuterVolumeSpecName: "config-data") pod "b252582c-b708-4d5d-be78-dc90b4bd3990" (UID: "b252582c-b708-4d5d-be78-dc90b4bd3990"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.981871 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982470 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982494 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsc69\" (UniqueName: \"kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982515 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982550 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982688 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982701 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98dwv\" (UniqueName: \"kubernetes.io/projected/b252582c-b708-4d5d-be78-dc90b4bd3990-kube-api-access-98dwv\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.982713 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b252582c-b708-4d5d-be78-dc90b4bd3990-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.983980 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.984026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.984136 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.984408 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:58 crc kubenswrapper[4768]: I0223 18:49:58.984862 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.002337 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsc69\" (UniqueName: \"kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69\") pod \"dnsmasq-dns-74f6bcbc87-gls4j\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.088082 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.315769 4768 generic.go:334] "Generic (PLEG): container finished" podID="85c2b171-151e-42fe-b653-b9edfbd33766" containerID="60580109300dee7dffd6dd08461641f54a380b553a7433be4afa2359316f20a1" exitCode=0
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.315851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" event={"ID":"85c2b171-151e-42fe-b653-b9edfbd33766","Type":"ContainerDied","Data":"60580109300dee7dffd6dd08461641f54a380b553a7433be4afa2359316f20a1"}
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.315891 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" event={"ID":"85c2b171-151e-42fe-b653-b9edfbd33766","Type":"ContainerStarted","Data":"6be651833c9a51d1774bafb15d21b6de684bb0aad999e1d4d19c435d5a12aee0"}
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.331226 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-c8v6w"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.350403 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-c8v6w" event={"ID":"b252582c-b708-4d5d-be78-dc90b4bd3990","Type":"ContainerDied","Data":"6c0812d55681ca06f2b4014b7f7e29397ac777c516457a203429d413e93b4c32"}
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.350461 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c0812d55681ca06f2b4014b7f7e29397ac777c516457a203429d413e93b4c32"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.570049 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"]
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.641322 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"]
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.659537 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4s5nx"]
Feb 23 18:49:59 crc kubenswrapper[4768]: E0223 18:49:59.660013 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b252582c-b708-4d5d-be78-dc90b4bd3990" containerName="keystone-db-sync"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.660034 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b252582c-b708-4d5d-be78-dc90b4bd3990" containerName="keystone-db-sync"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.660218 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b252582c-b708-4d5d-be78-dc90b4bd3990" containerName="keystone-db-sync"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.716227 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.720880 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.723370 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4s5nx"]
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.750111 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.750446 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.750552 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.750706 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftws5"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.750819 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.772905 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"]
Feb 23 18:49:59 crc kubenswrapper[4768]: E0223 18:49:59.773601 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c2b171-151e-42fe-b653-b9edfbd33766" containerName="init"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.773621 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c2b171-151e-42fe-b653-b9edfbd33766" containerName="init"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.773871 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c2b171-151e-42fe-b653-b9edfbd33766" containerName="init"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.775097 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806294 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb\") pod \"85c2b171-151e-42fe-b653-b9edfbd33766\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") "
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806437 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb\") pod \"85c2b171-151e-42fe-b653-b9edfbd33766\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") "
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806536 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc2hf\" (UniqueName: \"kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf\") pod \"85c2b171-151e-42fe-b653-b9edfbd33766\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") "
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806590 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc\") pod \"85c2b171-151e-42fe-b653-b9edfbd33766\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") "
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806694 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config\") pod \"85c2b171-151e-42fe-b653-b9edfbd33766\" (UID: \"85c2b171-151e-42fe-b653-b9edfbd33766\") "
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.806989 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807021 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwdcr\" (UniqueName: \"kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807046 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807079 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807102 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807131 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807240 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807284 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807311 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.807333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgsn\" (UniqueName: \"kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.843388 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf" (OuterVolumeSpecName: "kube-api-access-bc2hf") pod "85c2b171-151e-42fe-b653-b9edfbd33766" (UID: "85c2b171-151e-42fe-b653-b9edfbd33766"). InnerVolumeSpecName "kube-api-access-bc2hf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.850519 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"]
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.916325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.916993 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwdcr\" (UniqueName: \"kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917030 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917085 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917135 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917213 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917471 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917640 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.917687 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.921334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.921719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.921794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgsn\" (UniqueName: \"kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.921796 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.923877 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.926089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.926855 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc2hf\" (UniqueName: \"kubernetes.io/projected/85c2b171-151e-42fe-b653-b9edfbd33766-kube-api-access-bc2hf\") on node \"crc\" DevicePath \"\""
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.927802 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.968782 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.970816 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.976361 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.976480 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zv7fq"]
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.982238 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.982973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:49:59 crc kubenswrapper[4768]: I0223 18:49:59.987381 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.000784 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.001063 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.001204 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mxzv8"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.012322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85c2b171-151e-42fe-b653-b9edfbd33766" (UID: "85c2b171-151e-42fe-b653-b9edfbd33766"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.016482 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgsn\" (UniqueName: \"kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn\") pod \"dnsmasq-dns-847c4cc679-lnmfr\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " pod="openstack/dnsmasq-dns-847c4cc679-lnmfr"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.025640 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85c2b171-151e-42fe-b653-b9edfbd33766" (UID: "85c2b171-151e-42fe-b653-b9edfbd33766"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.027048 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwdcr\" (UniqueName: \"kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr\") pod \"keystone-bootstrap-4s5nx\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.028616 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55pqc\" (UniqueName: \"kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034096 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034237 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034339 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034551 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.034798 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.036794 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zv7fq"]
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.067380 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85c2b171-151e-42fe-b653-b9edfbd33766" (UID: "85c2b171-151e-42fe-b653-b9edfbd33766"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.071592 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config" (OuterVolumeSpecName: "config") pod "85c2b171-151e-42fe-b653-b9edfbd33766" (UID: "85c2b171-151e-42fe-b653-b9edfbd33766"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.078370 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-bc449878f-7drht"]
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.080261 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc449878f-7drht"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.092146 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.092960 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.093015 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ph5r2"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.092972 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.105683 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4s5nx"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.106313 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-g998f"]
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.107495 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g998f"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.112148 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gl94w"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137262 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tfm\" (UniqueName: \"kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137347 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137368 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6d8j\" (UniqueName: \"kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137392 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55pqc\" (UniqueName: \"kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137486 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137505 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts\") pod
\"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137570 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137604 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137662 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.137675 4768 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85c2b171-151e-42fe-b653-b9edfbd33766-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.140978 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.141240 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.141422 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.141517 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.143576 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.149683 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.151931 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.152214 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.154266 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.155377 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.179221 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.202853 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55pqc\" 
(UniqueName: \"kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc\") pod \"cinder-db-sync-zv7fq\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") " pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.226871 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.251674 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.280323 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bc449878f-7drht"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.295958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.296173 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.318039 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g998f"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.435838 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tfm\" (UniqueName: \"kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " 
pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.435924 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.435965 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.435998 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6d8j\" (UniqueName: \"kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.436020 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.436041 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.436511 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.437109 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.437230 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.438702 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.451623 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.477731 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6d8j\" (UniqueName: \"kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j\") pod \"neutron-db-sync-g998f\" (UID: 
\"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.494942 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle\") pod \"neutron-db-sync-g998f\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.495018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.495850 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tfm\" (UniqueName: \"kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm\") pod \"horizon-bc449878f-7drht\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.547176 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.547203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.547219 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.547283 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.547346 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.549016 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj6rd\" (UniqueName: \"kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.549039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.586449 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.620306 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n58l7"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.621495 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.624643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" event={"ID":"85c2b171-151e-42fe-b653-b9edfbd33766","Type":"ContainerDied","Data":"6be651833c9a51d1774bafb15d21b6de684bb0aad999e1d4d19c435d5a12aee0"} Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.624715 4768 scope.go:117] "RemoveContainer" containerID="60580109300dee7dffd6dd08461641f54a380b553a7433be4afa2359316f20a1" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.624868 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b946c75cc-mptpp" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.634197 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.634448 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sd75x" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.634461 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650378 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj6rd\" (UniqueName: \"kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650453 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650688 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" 
Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650855 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.650894 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.651514 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.657621 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" event={"ID":"aecc462a-6fdb-41b1-b4ea-be7012b807cb","Type":"ContainerStarted","Data":"b5c911bd0e09dc6e0e342a13389c4ef828190274ba7d2f9b7db623b763729c3e"} Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.659879 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.670470 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.689614 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.693169 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.695950 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.706879 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj6rd\" (UniqueName: \"kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd\") pod \"ceilometer-0\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.731349 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n58l7"] Feb 23 18:50:00 crc 
kubenswrapper[4768]: I0223 18:50:00.737913 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.753545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.753608 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.753763 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkhn2\" (UniqueName: \"kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.753898 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.753921 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts\") 
pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.757457 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hcnm6"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.759145 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.762272 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.762640 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fkv4b" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.786313 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.842581 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.845661 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.855875 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.855937 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkhn2\" (UniqueName: \"kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856047 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856069 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 
18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856091 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6q9\" (UniqueName: \"kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.856190 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.860117 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.860527 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.860644 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-bmfg2" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.861211 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " 
pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.862139 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hcnm6"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.868399 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.878378 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.889016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.893601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkhn2\" (UniqueName: \"kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2\") pod \"placement-db-sync-n58l7\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.903792 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.925508 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958090 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc28h\" (UniqueName: \"kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958199 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958296 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " 
pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.958325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.959465 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.959501 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6q9\" (UniqueName: \"kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.959526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.959557 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " 
pod="openstack/glance-default-external-api-0" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.972712 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.974076 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.975381 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.977725 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.994301 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"] Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.994714 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:00 crc kubenswrapper[4768]: I0223 18:50:00.997593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb6q9\" (UniqueName: \"kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9\") pod \"barbican-db-sync-hcnm6\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.001081 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.003145 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.009579 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.028159 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.032605 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.034387 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.056750 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063524 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063561 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 
18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063592 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063619 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063658 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pw2c\" (UniqueName: \"kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063692 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc28h\" (UniqueName: \"kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 
18:50:01.063751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063770 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.063826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.068674 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 
18:50:01.072715 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.072867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.076059 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.077216 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.079872 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.094502 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:01 crc kubenswrapper[4768]: W0223 18:50:01.102363 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9050bc07_2760_48bd_9005_7406de7a76ce.slice/crio-60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607 WatchSource:0}: Error finding container 60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607: Status 404 returned error can't find the container with id 60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607 Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.113859 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.134018 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc28h\" (UniqueName: \"kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h\") pod \"glance-default-external-api-0\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pw2c\" (UniqueName: \"kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173496 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173552 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173738 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.173786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.175747 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.175952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.179141 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.180213 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.180656 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.185347 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.201779 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.205232 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b946c75cc-mptpp"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.207328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pw2c\" (UniqueName: \"kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c\") pod \"dnsmasq-dns-785d8bcb8c-qbknd\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.273964 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4s5nx"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.288535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.289810 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.289897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 
18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290072 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290294 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290369 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852t7\" (UniqueName: \"kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290432 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290466 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc 
kubenswrapper[4768]: I0223 18:50:01.290536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290606 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.290642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhsw\" (UniqueName: \"kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.330611 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.350156 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c2b171-151e-42fe-b653-b9edfbd33766" path="/var/lib/kubelet/pods/85c2b171-151e-42fe-b653-b9edfbd33766/volumes" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392315 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852t7\" (UniqueName: \"kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392758 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392775 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392812 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392830 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhsw\" (UniqueName: \"kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392883 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392956 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.392993 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.401521 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.403814 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.404490 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc 
kubenswrapper[4768]: I0223 18:50:01.404878 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.409369 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.410917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.411096 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.413572 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.413678 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.421392 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.427518 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852t7\" (UniqueName: \"kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.442057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhsw\" (UniqueName: \"kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw\") pod \"horizon-845b48bb89-v6rjx\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.511106 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.614082 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.656960 
4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.664526 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zv7fq"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.679073 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.698760 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" event={"ID":"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed","Type":"ContainerStarted","Data":"fa9b3eed70e557e7181b726a9774b9519a30a642aef59aa26eb7da539d3b2bf4"} Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.725964 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4s5nx" event={"ID":"9050bc07-2760-48bd-9005-7406de7a76ce","Type":"ContainerStarted","Data":"86e1f1432890eb125a20d8caa185e86c98fda054fe4d0053804ce9dd6bb0dcd2"} Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.726012 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4s5nx" event={"ID":"9050bc07-2760-48bd-9005-7406de7a76ce","Type":"ContainerStarted","Data":"60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607"} Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.761367 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4s5nx" podStartSLOduration=2.761341727 podStartE2EDuration="2.761341727s" podCreationTimestamp="2026-02-23 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:01.749637216 +0000 UTC m=+997.140123016" watchObservedRunningTime="2026-02-23 18:50:01.761341727 +0000 UTC m=+997.151827527" Feb 23 18:50:01 crc kubenswrapper[4768]: 
I0223 18:50:01.768667 4768 generic.go:334] "Generic (PLEG): container finished" podID="aecc462a-6fdb-41b1-b4ea-be7012b807cb" containerID="843657b5d56ebe8e3effa925dfd36b61dce76fdc699b7d1de42b9a9a71b22583" exitCode=0 Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.768714 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" event={"ID":"aecc462a-6fdb-41b1-b4ea-be7012b807cb","Type":"ContainerDied","Data":"843657b5d56ebe8e3effa925dfd36b61dce76fdc699b7d1de42b9a9a71b22583"} Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.842508 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-g998f"] Feb 23 18:50:01 crc kubenswrapper[4768]: I0223 18:50:01.852637 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:50:01 crc kubenswrapper[4768]: W0223 18:50:01.915076 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40891100_89e6_4bd1_9ea0_8707548ffee8.slice/crio-47ca1535d18dd1035ad658330871c79c9974ed20a1312713dd603e1175978f15 WatchSource:0}: Error finding container 47ca1535d18dd1035ad658330871c79c9974ed20a1312713dd603e1175978f15: Status 404 returned error can't find the container with id 47ca1535d18dd1035ad658330871c79c9974ed20a1312713dd603e1175978f15 Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.061683 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-bc449878f-7drht"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.072067 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n58l7"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.534804 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hcnm6"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.549025 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.558279 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.561918 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.606606 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bc449878f-7drht"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.641791 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.642358 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.642409 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.642524 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc 
kubenswrapper[4768]: I0223 18:50:02.642547 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.643110 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsc69\" (UniqueName: \"kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.656636 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69" (OuterVolumeSpecName: "kube-api-access-tsc69") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "kube-api-access-tsc69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.703114 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config" (OuterVolumeSpecName: "config") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.723315 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:50:02 crc kubenswrapper[4768]: E0223 18:50:02.724453 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecc462a-6fdb-41b1-b4ea-be7012b807cb" containerName="init" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.724471 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecc462a-6fdb-41b1-b4ea-be7012b807cb" containerName="init" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.724639 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecc462a-6fdb-41b1-b4ea-be7012b807cb" containerName="init" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.725470 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.743840 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.745607 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.751274 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") pod \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\" (UID: \"aecc462a-6fdb-41b1-b4ea-be7012b807cb\") " Feb 23 18:50:02 crc kubenswrapper[4768]: W0223 18:50:02.755713 4768 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/aecc462a-6fdb-41b1-b4ea-be7012b807cb/volumes/kubernetes.io~configmap/dns-svc Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.755743 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.763402 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.763438 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.763454 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsc69\" (UniqueName: \"kubernetes.io/projected/aecc462a-6fdb-41b1-b4ea-be7012b807cb-kube-api-access-tsc69\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.764331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.769644 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.823522 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.850300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aecc462a-6fdb-41b1-b4ea-be7012b807cb" (UID: "aecc462a-6fdb-41b1-b4ea-be7012b807cb"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.850578 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" event={"ID":"21d677b5-cbc7-4501-addc-9e06c0bb8990","Type":"ContainerStarted","Data":"8eeab25e22bc469e9a8b3eb90672518be6ddf6333726526d8f0e37cb6e4ad28c"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.887264 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.887545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.887739 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.887767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.887891 
4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgxmm\" (UniqueName: \"kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.888097 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.888113 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.888141 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aecc462a-6fdb-41b1-b4ea-be7012b807cb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.890574 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zv7fq" event={"ID":"d689e8c1-2c72-4fe1-890c-ba586628dd4b","Type":"ContainerStarted","Data":"77b1530100971f706edd575769ad7ed83d77de4da6bc802073a70ccd05d5228c"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.906626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n58l7" event={"ID":"cd2ba036-bbca-4b94-8f72-70e252e5a2b9","Type":"ContainerStarted","Data":"1816da3801c98d4284aae9d84f440a7c958f594e2a23002641aae391b4a56d22"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.930843 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g998f" 
event={"ID":"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5","Type":"ContainerStarted","Data":"64f28df03ba902db00b2ee197556ce4b38ff850a4d8f7b9785597c2fff956a9f"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.930882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g998f" event={"ID":"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5","Type":"ContainerStarted","Data":"c52c579baba1e29879f644669a7530376a21128f7310387bd4cf33a89b51104f"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.953181 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerStarted","Data":"47ca1535d18dd1035ad658330871c79c9974ed20a1312713dd603e1175978f15"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.964120 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.977906 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.979336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-gls4j" event={"ID":"aecc462a-6fdb-41b1-b4ea-be7012b807cb","Type":"ContainerDied","Data":"b5c911bd0e09dc6e0e342a13389c4ef828190274ba7d2f9b7db623b763729c3e"} Feb 23 18:50:02 crc kubenswrapper[4768]: I0223 18:50:02.979414 4768 scope.go:117] "RemoveContainer" containerID="843657b5d56ebe8e3effa925dfd36b61dce76fdc699b7d1de42b9a9a71b22583" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.008037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.008157 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.008188 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.008235 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgxmm\" (UniqueName: \"kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm\") pod \"horizon-67699f99c7-5rzsw\" (UID: 
\"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.008309 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.009273 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.009511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.022379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.023090 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.038550 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-g998f" podStartSLOduration=4.038523544 podStartE2EDuration="4.038523544s" podCreationTimestamp="2026-02-23 18:49:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:02.953680689 +0000 UTC m=+998.344166489" watchObservedRunningTime="2026-02-23 18:50:03.038523544 +0000 UTC m=+998.429009344" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.086665 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.091912 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.098165 4768 generic.go:334] "Generic (PLEG): container finished" podID="ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" containerID="8b3bdfa542b3bd438cb64fa00683b41bccf7257a767cfd2839ae75fc9b18f4b3" exitCode=0 Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.098270 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" event={"ID":"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed","Type":"ContainerDied","Data":"8b3bdfa542b3bd438cb64fa00683b41bccf7257a767cfd2839ae75fc9b18f4b3"} Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.138973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgxmm\" (UniqueName: \"kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm\") pod \"horizon-67699f99c7-5rzsw\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.151754 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hcnm6" 
event={"ID":"6f6df03b-46d7-4b9e-a9cd-949eca9bf718","Type":"ContainerStarted","Data":"e3bbe1fd3870d6223df96bac0da09bff37ba7b3c6696f72fc0eef38c7b17a176"} Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.185978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerStarted","Data":"7af1c4a52ee31edea59b62628834908d3380794171146ff7c61474a35e60fecd"} Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.395725 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"] Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.399689 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.409464 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-gls4j"] Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.489394 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.844717 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878174 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcgsn\" (UniqueName: \"kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878299 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878379 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878398 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878455 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.878522 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config\") pod \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\" (UID: \"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed\") " Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.886235 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn" (OuterVolumeSpecName: "kube-api-access-jcgsn") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "kube-api-access-jcgsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.919014 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.924008 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.924121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.938267 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.981928 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.982019 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.982034 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.982046 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcgsn\" (UniqueName: \"kubernetes.io/projected/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-kube-api-access-jcgsn\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.982057 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:03 crc kubenswrapper[4768]: I0223 18:50:03.993293 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config" (OuterVolumeSpecName: "config") pod "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" (UID: "ce9d7d11-2482-47b7-90d1-bed6f87cd1ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.086643 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.114109 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.211289 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerStarted","Data":"87158b81fc6f05ef6d2358501e93aa2323d5607afe1d63ce82f3f6b2c239ffde"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.213868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerStarted","Data":"7bcd805c84498830f593aeb64da766f66b0a41461d135877dc274dc480c91a1e"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.216461 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerStarted","Data":"f288ad0c307b12b39fb34e061b6ce6641326600c98bcc27745b27db264eacce4"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.225049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerStarted","Data":"cdf3c3377e85723447eea6a32572f1df6b3fc8034627800bf925a79b742896df"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 
18:50:04.229145 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" event={"ID":"ce9d7d11-2482-47b7-90d1-bed6f87cd1ed","Type":"ContainerDied","Data":"fa9b3eed70e557e7181b726a9774b9519a30a642aef59aa26eb7da539d3b2bf4"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.229194 4768 scope.go:117] "RemoveContainer" containerID="8b3bdfa542b3bd438cb64fa00683b41bccf7257a767cfd2839ae75fc9b18f4b3" Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.229289 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-lnmfr" Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.259447 4768 generic.go:334] "Generic (PLEG): container finished" podID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerID="abe14ab1439a652d45093c4365ac8e945e67ea74fec2d9ec4e46c5abcb408834" exitCode=0 Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.259515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" event={"ID":"21d677b5-cbc7-4501-addc-9e06c0bb8990","Type":"ContainerDied","Data":"abe14ab1439a652d45093c4365ac8e945e67ea74fec2d9ec4e46c5abcb408834"} Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.388322 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"] Feb 23 18:50:04 crc kubenswrapper[4768]: I0223 18:50:04.427929 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-lnmfr"] Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.266415 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.288353 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" 
event={"ID":"21d677b5-cbc7-4501-addc-9e06c0bb8990","Type":"ContainerStarted","Data":"ba052cf0e8dd4ed33bfc1a58960d20b7dfde90d61757b02ee78d2091e231ed48"} Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.289632 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.295110 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerStarted","Data":"8d74a47b1e678856d81c27679f1372f3ed5e10aea8a20ffae66c0aa3c877bd87"} Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.330700 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecc462a-6fdb-41b1-b4ea-be7012b807cb" path="/var/lib/kubelet/pods/aecc462a-6fdb-41b1-b4ea-be7012b807cb/volumes" Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.331260 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" path="/var/lib/kubelet/pods/ce9d7d11-2482-47b7-90d1-bed6f87cd1ed/volumes" Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.331814 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.331851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerStarted","Data":"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f"} Feb 23 18:50:05 crc kubenswrapper[4768]: I0223 18:50:05.339629 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" podStartSLOduration=5.339610955 podStartE2EDuration="5.339610955s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:05.315716011 +0000 UTC m=+1000.706201801" watchObservedRunningTime="2026-02-23 18:50:05.339610955 +0000 UTC m=+1000.730096755" Feb 23 18:50:06 crc kubenswrapper[4768]: I0223 18:50:06.346870 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerStarted","Data":"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab"} Feb 23 18:50:06 crc kubenswrapper[4768]: I0223 18:50:06.347359 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-log" containerID="cri-o://7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" gracePeriod=30 Feb 23 18:50:06 crc kubenswrapper[4768]: I0223 18:50:06.347442 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-httpd" containerID="cri-o://7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" gracePeriod=30 Feb 23 18:50:06 crc kubenswrapper[4768]: I0223 18:50:06.347571 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6g89c" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="registry-server" containerID="cri-o://b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7" gracePeriod=2 Feb 23 18:50:06 crc kubenswrapper[4768]: I0223 18:50:06.376922 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.376898888 podStartE2EDuration="6.376898888s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:06.371651324 +0000 UTC m=+1001.762137134" watchObservedRunningTime="2026-02-23 18:50:06.376898888 +0000 UTC m=+1001.767384688" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.277534 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.286795 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.395896 4768 generic.go:334] "Generic (PLEG): container finished" podID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerID="b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7" exitCode=0 Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.395977 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerDied","Data":"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.404733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6g89c" event={"ID":"8593dfa7-1021-4be4-8828-5cdbf51aef72","Type":"ContainerDied","Data":"e068c5c84ba86baac6e0873937fd06b2304a939a31b938581391418a9b16743a"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.404762 4768 scope.go:117] "RemoveContainer" containerID="b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.396152 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6g89c" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.413362 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerStarted","Data":"1d01c0da6ce43c9d8fdfa819f1c0db5cafa8e24965ee87731ae0d8710a40b9c4"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.413785 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-log" containerID="cri-o://8d74a47b1e678856d81c27679f1372f3ed5e10aea8a20ffae66c0aa3c877bd87" gracePeriod=30 Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.414434 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-httpd" containerID="cri-o://1d01c0da6ce43c9d8fdfa819f1c0db5cafa8e24965ee87731ae0d8710a40b9c4" gracePeriod=30 Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.430729 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerID="7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" exitCode=0 Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.431758 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerID="7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" exitCode=143 Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.431705 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.431732 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerDied","Data":"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.432697 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerDied","Data":"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.432733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7ce15e04-156c-4ec7-a908-a36a712aac90","Type":"ContainerDied","Data":"cdf3c3377e85723447eea6a32572f1df6b3fc8034627800bf925a79b742896df"} Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.432761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.432921 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7r4v\" (UniqueName: \"kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v\") pod \"8593dfa7-1021-4be4-8828-5cdbf51aef72\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.436787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities\") pod 
\"8593dfa7-1021-4be4-8828-5cdbf51aef72\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.436845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc28h\" (UniqueName: \"kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.436936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.436980 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.437033 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content\") pod \"8593dfa7-1021-4be4-8828-5cdbf51aef72\" (UID: \"8593dfa7-1021-4be4-8828-5cdbf51aef72\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.437068 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.437096 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.437163 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run\") pod \"7ce15e04-156c-4ec7-a908-a36a712aac90\" (UID: \"7ce15e04-156c-4ec7-a908-a36a712aac90\") " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.439908 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.442963 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.442925607 podStartE2EDuration="7.442925607s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:07.439673208 +0000 UTC m=+1002.830159008" watchObservedRunningTime="2026-02-23 18:50:07.442925607 +0000 UTC m=+1002.833411407" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.443553 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs" (OuterVolumeSpecName: "logs") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.448500 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities" (OuterVolumeSpecName: "utilities") pod "8593dfa7-1021-4be4-8828-5cdbf51aef72" (UID: "8593dfa7-1021-4be4-8828-5cdbf51aef72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.453123 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts" (OuterVolumeSpecName: "scripts") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.453265 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v" (OuterVolumeSpecName: "kube-api-access-p7r4v") pod "8593dfa7-1021-4be4-8828-5cdbf51aef72" (UID: "8593dfa7-1021-4be4-8828-5cdbf51aef72"). InnerVolumeSpecName "kube-api-access-p7r4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.455272 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.468473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h" (OuterVolumeSpecName: "kube-api-access-dc28h") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "kube-api-access-dc28h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.469891 4768 scope.go:117] "RemoveContainer" containerID="2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.475821 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8593dfa7-1021-4be4-8828-5cdbf51aef72" (UID: "8593dfa7-1021-4be4-8828-5cdbf51aef72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.497043 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.520478 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data" (OuterVolumeSpecName: "config-data") pod "7ce15e04-156c-4ec7-a908-a36a712aac90" (UID: "7ce15e04-156c-4ec7-a908-a36a712aac90"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539633 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7r4v\" (UniqueName: \"kubernetes.io/projected/8593dfa7-1021-4be4-8828-5cdbf51aef72-kube-api-access-p7r4v\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539668 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539678 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc28h\" (UniqueName: \"kubernetes.io/projected/7ce15e04-156c-4ec7-a908-a36a712aac90-kube-api-access-dc28h\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539687 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539697 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8593dfa7-1021-4be4-8828-5cdbf51aef72-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539705 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539713 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539751 4768 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539759 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ce15e04-156c-4ec7-a908-a36a712aac90-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.539772 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce15e04-156c-4ec7-a908-a36a712aac90-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.586503 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.641363 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.764424 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.785497 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6g89c"] Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.802774 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.813289 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828392 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.828873 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-httpd" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828897 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-httpd" Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.828921 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="extract-content" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828928 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="extract-content" Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.828944 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" containerName="init" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828951 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" containerName="init" Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.828965 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-log" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828970 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-log" Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.828985 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="extract-utilities" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.828991 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="extract-utilities" 
Feb 23 18:50:07 crc kubenswrapper[4768]: E0223 18:50:07.829009 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="registry-server" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.829015 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="registry-server" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.829177 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" containerName="registry-server" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.829189 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce9d7d11-2482-47b7-90d1-bed6f87cd1ed" containerName="init" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.829201 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-httpd" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.829211 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" containerName="glance-log" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.830297 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.834949 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.875163 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976276 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976325 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976357 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976566 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " 
pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976770 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxv89\" (UniqueName: \"kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.976895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:07 crc kubenswrapper[4768]: I0223 18:50:07.977011 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078599 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 
crc kubenswrapper[4768]: I0223 18:50:08.078766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078897 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.078950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxv89\" (UniqueName: \"kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.079282 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.079730 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.079783 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.086771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.087099 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.090714 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.101359 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxv89\" (UniqueName: \"kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.111971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.195764 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.450465 4768 generic.go:334] "Generic (PLEG): container finished" podID="9050bc07-2760-48bd-9005-7406de7a76ce" containerID="86e1f1432890eb125a20d8caa185e86c98fda054fe4d0053804ce9dd6bb0dcd2" exitCode=0 Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.450618 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4s5nx" event={"ID":"9050bc07-2760-48bd-9005-7406de7a76ce","Type":"ContainerDied","Data":"86e1f1432890eb125a20d8caa185e86c98fda054fe4d0053804ce9dd6bb0dcd2"} Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.471966 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.485973 4768 generic.go:334] "Generic (PLEG): container finished" podID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerID="1d01c0da6ce43c9d8fdfa819f1c0db5cafa8e24965ee87731ae0d8710a40b9c4" exitCode=0 Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.486008 4768 generic.go:334] "Generic (PLEG): container finished" podID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerID="8d74a47b1e678856d81c27679f1372f3ed5e10aea8a20ffae66c0aa3c877bd87" exitCode=143 Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.486036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerDied","Data":"1d01c0da6ce43c9d8fdfa819f1c0db5cafa8e24965ee87731ae0d8710a40b9c4"} Feb 23 18:50:08 crc kubenswrapper[4768]: I0223 18:50:08.486065 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerDied","Data":"8d74a47b1e678856d81c27679f1372f3ed5e10aea8a20ffae66c0aa3c877bd87"} Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 
18:50:09.329785 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ce15e04-156c-4ec7-a908-a36a712aac90" path="/var/lib/kubelet/pods/7ce15e04-156c-4ec7-a908-a36a712aac90/volumes" Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.330819 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8593dfa7-1021-4be4-8828-5cdbf51aef72" path="/var/lib/kubelet/pods/8593dfa7-1021-4be4-8828-5cdbf51aef72/volumes" Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.545767 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.545845 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.545999 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.547453 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:50:09 crc kubenswrapper[4768]: I0223 18:50:09.547532 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0" gracePeriod=600 Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.344019 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.357983 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.365490 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.380385 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.385918 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.390463 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.401398 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-58cc9986b4-t7tcs"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.403964 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.406016 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58cc9986b4-t7tcs"] Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.508660 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0" exitCode=0 Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.508974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0"} Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552499 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-scripts\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552547 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552573 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe017d9-f16b-465c-97a0-ebe4466006f0-logs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 
18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552600 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552624 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-combined-ca-bundle\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552647 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtxb\" (UniqueName: \"kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552704 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 
crc kubenswrapper[4768]: I0223 18:50:10.552718 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-tls-certs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552746 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-secret-key\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552774 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-config-data\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkbpd\" (UniqueName: \"kubernetes.io/projected/5fe017d9-f16b-465c-97a0-ebe4466006f0-kube-api-access-nkbpd\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552834 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " 
pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.552857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe017d9-f16b-465c-97a0-ebe4466006f0-logs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655571 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655608 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-combined-ca-bundle\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655630 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtxb\" (UniqueName: \"kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 
18:50:10.655684 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655724 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655751 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-tls-certs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655787 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-secret-key\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655823 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-config-data\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nkbpd\" (UniqueName: \"kubernetes.io/projected/5fe017d9-f16b-465c-97a0-ebe4466006f0-kube-api-access-nkbpd\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655892 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655922 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655958 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.655975 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-scripts\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.656845 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-scripts\") 
pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.658494 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.660676 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.660952 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fe017d9-f16b-465c-97a0-ebe4466006f0-logs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.661477 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.663938 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.664885 
4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fe017d9-f16b-465c-97a0-ebe4466006f0-config-data\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.664982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.665537 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-combined-ca-bundle\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.665819 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-secret-key\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.668379 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fe017d9-f16b-465c-97a0-ebe4466006f0-horizon-tls-certs\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.672815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.691755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtxb\" (UniqueName: \"kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb\") pod \"horizon-84699c9d66-ghjfn\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.694911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkbpd\" (UniqueName: \"kubernetes.io/projected/5fe017d9-f16b-465c-97a0-ebe4466006f0-kube-api-access-nkbpd\") pod \"horizon-58cc9986b4-t7tcs\" (UID: \"5fe017d9-f16b-465c-97a0-ebe4466006f0\") " pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.718057 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:10 crc kubenswrapper[4768]: I0223 18:50:10.750441 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.261880 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4s5nx" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286194 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286311 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwdcr\" (UniqueName: \"kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286613 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.286663 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data\") pod \"9050bc07-2760-48bd-9005-7406de7a76ce\" (UID: \"9050bc07-2760-48bd-9005-7406de7a76ce\") " Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.295467 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr" (OuterVolumeSpecName: "kube-api-access-qwdcr") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "kube-api-access-qwdcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.295619 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.296363 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts" (OuterVolumeSpecName: "scripts") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.306068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.343389 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data" (OuterVolumeSpecName: "config-data") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.359278 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.366369 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9050bc07-2760-48bd-9005-7406de7a76ce" (UID: "9050bc07-2760-48bd-9005-7406de7a76ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392664 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392704 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392720 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392755 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392767 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwdcr\" (UniqueName: \"kubernetes.io/projected/9050bc07-2760-48bd-9005-7406de7a76ce-kube-api-access-qwdcr\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.392779 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9050bc07-2760-48bd-9005-7406de7a76ce-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.449810 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.450023 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-mclnm" 
podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" containerID="cri-o://769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65" gracePeriod=10 Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.539811 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4s5nx" event={"ID":"9050bc07-2760-48bd-9005-7406de7a76ce","Type":"ContainerDied","Data":"60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607"} Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.539858 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60aed3759735afc039d1894b0614b191d0172e2bc17c965961c44efe4b652607" Feb 23 18:50:11 crc kubenswrapper[4768]: I0223 18:50:11.539927 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4s5nx" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.428688 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4s5nx"] Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.435138 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4s5nx"] Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.517108 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vnlkg"] Feb 23 18:50:12 crc kubenswrapper[4768]: E0223 18:50:12.517477 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9050bc07-2760-48bd-9005-7406de7a76ce" containerName="keystone-bootstrap" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.517491 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9050bc07-2760-48bd-9005-7406de7a76ce" containerName="keystone-bootstrap" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.517642 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9050bc07-2760-48bd-9005-7406de7a76ce" containerName="keystone-bootstrap" Feb 23 18:50:12 crc 
kubenswrapper[4768]: I0223 18:50:12.518154 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.521674 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.522337 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftws5" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.522645 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.522822 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.523229 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.527919 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vnlkg"] Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.590954 4768 generic.go:334] "Generic (PLEG): container finished" podID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerID="769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65" exitCode=0 Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.591039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerDied","Data":"769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65"} Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.659586 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data\") pod \"keystone-bootstrap-vnlkg\" 
(UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.659668 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.660636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.660700 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.660766 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t58xn\" (UniqueName: \"kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.660827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: 
\"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.762539 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.762621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.762650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.762681 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.762730 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t58xn\" (UniqueName: \"kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc 
kubenswrapper[4768]: I0223 18:50:12.762778 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.770527 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.770917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.771815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.771830 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.773305 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.790743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t58xn\" (UniqueName: \"kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn\") pod \"keystone-bootstrap-vnlkg\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:12 crc kubenswrapper[4768]: I0223 18:50:12.874887 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:13 crc kubenswrapper[4768]: I0223 18:50:13.319131 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9050bc07-2760-48bd-9005-7406de7a76ce" path="/var/lib/kubelet/pods/9050bc07-2760-48bd-9005-7406de7a76ce/volumes" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.049881 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mclnm" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.267348 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.272190 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.289107 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.299522 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.299636 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.299732 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9c4\" (UniqueName: \"kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.400742 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.400840 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wk9c4\" (UniqueName: \"kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.400921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.401362 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.401560 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.424826 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9c4\" (UniqueName: \"kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4\") pod \"community-operators-kv44j\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:14 crc kubenswrapper[4768]: I0223 18:50:14.606461 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:18 crc kubenswrapper[4768]: E0223 18:50:18.185440 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 23 18:50:18 crc kubenswrapper[4768]: E0223 18:50:18.186317 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bhf5hf9h68dhfch5ffhc4h679h5f5h68bh566h644h6ch56bh5b6h5d8h567h695h86h694h697h95h5d6h8chb5h57ch68h67dh555hb7h549h68fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hj6rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Com
mand:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(40891100-89e6-4bd1-9ea0-8707548ffee8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:50:19 crc kubenswrapper[4768]: I0223 18:50:19.053271 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mclnm" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Feb 23 18:50:22 crc kubenswrapper[4768]: E0223 18:50:22.329610 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 23 18:50:22 crc kubenswrapper[4768]: E0223 18:50:22.330590 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkhn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-n58l7_openstack(cd2ba036-bbca-4b94-8f72-70e252e5a2b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:50:22 crc kubenswrapper[4768]: E0223 18:50:22.332392 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-n58l7" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" Feb 23 18:50:22 crc kubenswrapper[4768]: I0223 18:50:22.336617 4768 scope.go:117] "RemoveContainer" containerID="ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9" Feb 23 18:50:22 crc kubenswrapper[4768]: E0223 18:50:22.732469 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-n58l7" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" Feb 23 18:50:23 crc kubenswrapper[4768]: I0223 18:50:23.013976 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-58cc9986b4-t7tcs"] Feb 23 18:50:24 crc kubenswrapper[4768]: I0223 18:50:24.749715 4768 generic.go:334] "Generic (PLEG): container finished" podID="827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" containerID="64f28df03ba902db00b2ee197556ce4b38ff850a4d8f7b9785597c2fff956a9f" exitCode=0 Feb 23 18:50:24 crc kubenswrapper[4768]: I0223 18:50:24.749785 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g998f" event={"ID":"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5","Type":"ContainerDied","Data":"64f28df03ba902db00b2ee197556ce4b38ff850a4d8f7b9785597c2fff956a9f"} Feb 23 18:50:29 crc kubenswrapper[4768]: I0223 18:50:29.050292 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mclnm" 
podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Feb 23 18:50:29 crc kubenswrapper[4768]: I0223 18:50:29.051072 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:50:30 crc kubenswrapper[4768]: E0223 18:50:30.936601 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 23 18:50:30 crc kubenswrapper[4768]: E0223 18:50:30.938001 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mb6q9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hcnm6_openstack(6f6df03b-46d7-4b9e-a9cd-949eca9bf718): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:50:30 crc kubenswrapper[4768]: E0223 18:50:30.940384 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hcnm6" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.078532 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.088080 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.092872 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.155735 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6d8j\" (UniqueName: \"kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j\") pod \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.155833 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.155917 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852t7\" (UniqueName: \"kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.155956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb\") pod \"5823e392-a97a-4f29-a8a4-3dbfeb426417\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156031 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77f7\" (UniqueName: \"kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7\") pod \"5823e392-a97a-4f29-a8a4-3dbfeb426417\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156054 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc\") pod \"5823e392-a97a-4f29-a8a4-3dbfeb426417\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156103 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156146 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156206 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb\") pod \"5823e392-a97a-4f29-a8a4-3dbfeb426417\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156317 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156353 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config\") pod \"5823e392-a97a-4f29-a8a4-3dbfeb426417\" (UID: \"5823e392-a97a-4f29-a8a4-3dbfeb426417\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156376 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle\") pod \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156405 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle\") pod \"4be8edd4-c691-4b23-903c-467ffafb5f9f\" (UID: \"4be8edd4-c691-4b23-903c-467ffafb5f9f\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.156479 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config\") pod \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\" (UID: \"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5\") " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.157364 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs" (OuterVolumeSpecName: "logs") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.157753 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.171534 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j" (OuterVolumeSpecName: "kube-api-access-r6d8j") pod "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" (UID: "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5"). InnerVolumeSpecName "kube-api-access-r6d8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.171536 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts" (OuterVolumeSpecName: "scripts") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.171623 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.171806 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7" (OuterVolumeSpecName: "kube-api-access-852t7") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "kube-api-access-852t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.184602 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7" (OuterVolumeSpecName: "kube-api-access-v77f7") pod "5823e392-a97a-4f29-a8a4-3dbfeb426417" (UID: "5823e392-a97a-4f29-a8a4-3dbfeb426417"). InnerVolumeSpecName "kube-api-access-v77f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.200770 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config" (OuterVolumeSpecName: "config") pod "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" (UID: "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.224057 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.239483 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" (UID: "827f35c4-f9c8-4dea-8da7-a1ca6296b0f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.247445 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data" (OuterVolumeSpecName: "config-data") pod "4be8edd4-c691-4b23-903c-467ffafb5f9f" (UID: "4be8edd4-c691-4b23-903c-467ffafb5f9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.247827 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5823e392-a97a-4f29-a8a4-3dbfeb426417" (UID: "5823e392-a97a-4f29-a8a4-3dbfeb426417"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.249356 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5823e392-a97a-4f29-a8a4-3dbfeb426417" (UID: "5823e392-a97a-4f29-a8a4-3dbfeb426417"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.254555 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5823e392-a97a-4f29-a8a4-3dbfeb426417" (UID: "5823e392-a97a-4f29-a8a4-3dbfeb426417"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259155 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259176 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259188 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259199 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259227 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259237 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259259 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4be8edd4-c691-4b23-903c-467ffafb5f9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259268 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259279 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6d8j\" (UniqueName: \"kubernetes.io/projected/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5-kube-api-access-r6d8j\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259289 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4be8edd4-c691-4b23-903c-467ffafb5f9f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259298 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-852t7\" (UniqueName: \"kubernetes.io/projected/4be8edd4-c691-4b23-903c-467ffafb5f9f-kube-api-access-852t7\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259306 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259314 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.259322 4768 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v77f7\" (UniqueName: \"kubernetes.io/projected/5823e392-a97a-4f29-a8a4-3dbfeb426417-kube-api-access-v77f7\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.271022 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config" (OuterVolumeSpecName: "config") pod "5823e392-a97a-4f29-a8a4-3dbfeb426417" (UID: "5823e392-a97a-4f29-a8a4-3dbfeb426417"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.286375 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.362617 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.363068 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5823e392-a97a-4f29-a8a4-3dbfeb426417-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.481530 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.820432 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-g998f" event={"ID":"827f35c4-f9c8-4dea-8da7-a1ca6296b0f5","Type":"ContainerDied","Data":"c52c579baba1e29879f644669a7530376a21128f7310387bd4cf33a89b51104f"} Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.820457 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-g998f" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.820473 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c52c579baba1e29879f644669a7530376a21128f7310387bd4cf33a89b51104f" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.822980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-mclnm" event={"ID":"5823e392-a97a-4f29-a8a4-3dbfeb426417","Type":"ContainerDied","Data":"de2535d5dda74f461f68d39908f6fe71eca86cadef7aef3a2f3a656bc21f44d0"} Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.823046 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-mclnm" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.826304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4be8edd4-c691-4b23-903c-467ffafb5f9f","Type":"ContainerDied","Data":"87158b81fc6f05ef6d2358501e93aa2323d5607afe1d63ce82f3f6b2c239ffde"} Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.826395 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.829231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58cc9986b4-t7tcs" event={"ID":"5fe017d9-f16b-465c-97a0-ebe4466006f0","Type":"ContainerStarted","Data":"095ea8e0a8584a2f151eedda2131191c5a9f19f1e10b567df7685044684a939d"} Feb 23 18:50:31 crc kubenswrapper[4768]: E0223 18:50:31.831603 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-hcnm6" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.875851 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.882772 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-mclnm"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.895970 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917478 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917540 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:31 crc kubenswrapper[4768]: E0223 18:50:31.917858 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="init" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917876 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="init" Feb 23 18:50:31 crc 
kubenswrapper[4768]: E0223 18:50:31.917885 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-httpd" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917891 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-httpd" Feb 23 18:50:31 crc kubenswrapper[4768]: E0223 18:50:31.917901 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917910 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" Feb 23 18:50:31 crc kubenswrapper[4768]: E0223 18:50:31.917927 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" containerName="neutron-db-sync" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917933 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" containerName="neutron-db-sync" Feb 23 18:50:31 crc kubenswrapper[4768]: E0223 18:50:31.917942 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-log" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.917947 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-log" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.918106 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-httpd" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.918133 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" containerName="glance-log" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 
18:50:31.918143 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" containerName="neutron-db-sync" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.918153 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.918996 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.923818 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.926835 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.936280 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974194 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974272 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974352 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88fx\" (UniqueName: \"kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974413 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974457 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974652 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:31 crc kubenswrapper[4768]: I0223 18:50:31.974778 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076220 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076343 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076390 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076428 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076459 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076602 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b88fx\" (UniqueName: \"kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076624 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.076917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.077319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.081551 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.087081 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.087269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.087917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.096622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b88fx\" (UniqueName: \"kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.109320 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.249207 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.436741 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"] Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.438264 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.468693 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"] Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.580736 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86445c674d-k7fnl"] Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.582772 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.587739 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.588707 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.588859 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gl94w" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.588975 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.593592 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86445c674d-k7fnl"] Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611609 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589gh\" (UniqueName: \"kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611681 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611723 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config\") pod 
\"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.611819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.713730 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.713794 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589gh\" (UniqueName: 
\"kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.713826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.713954 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714002 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714028 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714054 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714079 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dwqc\" (UniqueName: \"kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714130 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714153 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.714983 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.715511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.715616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.720854 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.742115 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589gh\" (UniqueName: \"kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh\") pod 
\"dnsmasq-dns-55f844cf75-l2tjj\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") " pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.748353 4768 scope.go:117] "RemoveContainer" containerID="b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.750306 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7\": container with ID starting with b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7 not found: ID does not exist" containerID="b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.750339 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7"} err="failed to get container status \"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7\": rpc error: code = NotFound desc = could not find container \"b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7\": container with ID starting with b5a4ba42e7caf6b5efb53f4ace09b6e74a8cdf8c765f84a06e20efe4add720d7 not found: ID does not exist" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.750359 4768 scope.go:117] "RemoveContainer" containerID="2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.750768 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad\": container with ID starting with 2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad not found: ID does not exist" 
containerID="2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.750814 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad"} err="failed to get container status \"2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad\": rpc error: code = NotFound desc = could not find container \"2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad\": container with ID starting with 2192e23f1e71b2ab589ca3b38c8c01505deb60d0846045bf37ea57ef9ed7afad not found: ID does not exist" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.750852 4768 scope.go:117] "RemoveContainer" containerID="ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.751525 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9\": container with ID starting with ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9 not found: ID does not exist" containerID="ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.751543 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9"} err="failed to get container status \"ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9\": rpc error: code = NotFound desc = could not find container \"ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9\": container with ID starting with ba3253d794a64496ed2c8425703430c50e3657894edaf364708e81814ab88ca9 not found: ID does not exist" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.751555 4768 scope.go:117] 
"RemoveContainer" containerID="7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.754712 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.754820 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/p
em/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55pqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-zv7fq_openstack(d689e8c1-2c72-4fe1-890c-ba586628dd4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.756485 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-zv7fq" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.815735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.816081 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dwqc\" (UniqueName: 
\"kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.816119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.816167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.816212 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.822135 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.822386 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.822990 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.825892 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.826632 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.837884 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dwqc\" (UniqueName: \"kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc\") pod \"neutron-86445c674d-k7fnl\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") " pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.840966 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerStarted","Data":"3cb7e26cc90bc3c2d0930b14e25e4148ddb682d3c42cd6d599aeb15673afbc18"} Feb 23 18:50:32 crc kubenswrapper[4768]: E0223 18:50:32.843299 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-zv7fq" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" Feb 23 18:50:32 crc kubenswrapper[4768]: I0223 18:50:32.913983 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.277939 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.326378 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4be8edd4-c691-4b23-903c-467ffafb5f9f" path="/var/lib/kubelet/pods/4be8edd4-c691-4b23-903c-467ffafb5f9f/volumes" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.327606 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" path="/var/lib/kubelet/pods/5823e392-a97a-4f29-a8a4-3dbfeb426417/volumes" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.339007 4768 scope.go:117] "RemoveContainer" containerID="7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.449822 4768 scope.go:117] "RemoveContainer" containerID="7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" Feb 23 18:50:33 crc kubenswrapper[4768]: E0223 18:50:33.450096 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab\": container with ID starting with 7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab not found: ID does not exist" containerID="7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450120 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab"} err="failed to get container status \"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab\": rpc error: code = NotFound desc = could not find container \"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab\": container with ID starting with 7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab not found: ID does not exist" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450144 4768 scope.go:117] "RemoveContainer" containerID="7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" Feb 23 18:50:33 crc kubenswrapper[4768]: E0223 18:50:33.450523 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f\": container with ID starting with 7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f not found: ID does not exist" containerID="7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450538 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f"} err="failed to get container status \"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f\": rpc error: code = NotFound desc = could not find container 
\"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f\": container with ID starting with 7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f not found: ID does not exist" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450552 4768 scope.go:117] "RemoveContainer" containerID="7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450714 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab"} err="failed to get container status \"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab\": rpc error: code = NotFound desc = could not find container \"7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab\": container with ID starting with 7fa94826a8867a4cb880b41f721a3642db871d72de8dd953f3bf77c016ac8dab not found: ID does not exist" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450727 4768 scope.go:117] "RemoveContainer" containerID="7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450877 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f"} err="failed to get container status \"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f\": rpc error: code = NotFound desc = could not find container \"7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f\": container with ID starting with 7540d2db250d109f2351d12928dc62f3ae6b57556d0a7c50cea8c8f693ce655f not found: ID does not exist" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.450890 4768 scope.go:117] "RemoveContainer" containerID="662c0ef856356498cd584cb766a97a6b53369859da285f23355df329a456b4b9" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.654565 4768 
scope.go:117] "RemoveContainer" containerID="769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65" Feb 23 18:50:33 crc kubenswrapper[4768]: E0223 18:50:33.680575 4768 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/dnsmasq-dns-698758b865-mclnm_openstack_dnsmasq-dns-769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65.log: no such file or directory" path="/var/log/containers/dnsmasq-dns-698758b865-mclnm_openstack_dnsmasq-dns-769f09a4884901dcd170f703ef0fd99d2cddcb08648f93f84af02f99099c5c65.log" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.779649 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vnlkg"] Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.798040 4768 scope.go:117] "RemoveContainer" containerID="0194ceaed58441ba968a3dfbe2745a04807c293da7652b120cac5e8fff96b8e7" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.860333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerStarted","Data":"8cdb26ff854ef5381a72deb2eb644f1c7eb33d68a017b2f964aad8031ff233ba"} Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.863515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f"} Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.881839 4768 scope.go:117] "RemoveContainer" containerID="1d01c0da6ce43c9d8fdfa819f1c0db5cafa8e24965ee87731ae0d8710a40b9c4" Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.888341 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" 
event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerStarted","Data":"326211414bab06de6e3e320987bf4657737969405d6bbe387618b2d5d5b871a3"} Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.917846 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:50:33 crc kubenswrapper[4768]: I0223 18:50:33.986451 4768 scope.go:117] "RemoveContainer" containerID="8d74a47b1e678856d81c27679f1372f3ed5e10aea8a20ffae66c0aa3c877bd87" Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.056321 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-mclnm" podUID="5823e392-a97a-4f29-a8a4-3dbfeb426417" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.132784 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.189534 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"] Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.252382 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86445c674d-k7fnl"] Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.344079 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.932415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerStarted","Data":"53704387cafa39f95630254fa6bae4f5d153807ab77741f6ac413545908d0ce2"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.941757 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" 
event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerStarted","Data":"f9e3461a2c97be4605ebd45790a637f855c5964ae39517275b37c79e5e416163"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.950887 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerStarted","Data":"992edccbbd4dbced78c9aa11bebdb96c2b21132c6ee9a7b8bdb85168a1de4b46"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.952989 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-845b48bb89-v6rjx" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon-log" containerID="cri-o://326211414bab06de6e3e320987bf4657737969405d6bbe387618b2d5d5b871a3" gracePeriod=30 Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.953714 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-845b48bb89-v6rjx" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon" containerID="cri-o://992edccbbd4dbced78c9aa11bebdb96c2b21132c6ee9a7b8bdb85168a1de4b46" gracePeriod=30 Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.963391 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerStarted","Data":"4fdcd00c6f8050d41022065c8ac3d5e39db2b0c4c92ee63384055d43d993f166"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.963442 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerStarted","Data":"a5402175a7e7229c709e0919d9fc24caef055d03c856f1174d6560ef1eb2e702"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.966174 4768 generic.go:334] "Generic (PLEG): container finished" podID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" 
containerID="2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70" exitCode=0 Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.966220 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerDied","Data":"2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.966236 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerStarted","Data":"3e688579097fba220fc0de064efffb49702d4886478ee2330bd09b50e0a01a86"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.985297 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerStarted","Data":"4f9201dd6adf2ed18bc2268e671843218e9442058f4722cbdeef4c484ce86cf3"} Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.986711 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-845b48bb89-v6rjx" podStartSLOduration=4.84359684 podStartE2EDuration="34.986687818s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="2026-02-23 18:50:03.090489398 +0000 UTC m=+998.480975198" lastFinishedPulling="2026-02-23 18:50:33.233580376 +0000 UTC m=+1028.624066176" observedRunningTime="2026-02-23 18:50:34.973135907 +0000 UTC m=+1030.363621717" watchObservedRunningTime="2026-02-23 18:50:34.986687818 +0000 UTC m=+1030.377173608" Feb 23 18:50:34 crc kubenswrapper[4768]: I0223 18:50:34.989700 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vnlkg" event={"ID":"6f5f03e9-0a62-4567-93d2-5abbb7b89219","Type":"ContainerStarted","Data":"1654e674e595c8dbe8a19648fa9dfbd91bd5a475b5d43e64650b9e8dfe99478a"} Feb 23 18:50:34 crc 
kubenswrapper[4768]: I0223 18:50:34.989756 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vnlkg" event={"ID":"6f5f03e9-0a62-4567-93d2-5abbb7b89219","Type":"ContainerStarted","Data":"4d4e6c7842dd09fd28735732353b7e926bcaa330c91cb6b23f23b8c70ebb8025"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:34.999608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerStarted","Data":"5beaf90673a241480f2721b2cb11d0bf9f251a26131590b7450193ab00ec0e69"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.004488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" event={"ID":"2534372c-ef07-45f2-917b-912873de873d","Type":"ContainerStarted","Data":"3effaf80267a1d3214f70419a3b7c84b8186557fea4beb0dcdb33a0c2c28d6de"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.020974 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerStarted","Data":"bfbd1b0852eb637126d18d1b2134229fd82aed1aa4505c42863b581f9d46b36b"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.021020 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerStarted","Data":"cba4c852570c7fb6a1f0f05588013260695c7b051a725fad0743d6f4e1f6dab8"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.021181 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-67699f99c7-5rzsw" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon-log" containerID="cri-o://cba4c852570c7fb6a1f0f05588013260695c7b051a725fad0743d6f4e1f6dab8" gracePeriod=30 Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.021638 4768 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/horizon-67699f99c7-5rzsw" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon" containerID="cri-o://bfbd1b0852eb637126d18d1b2134229fd82aed1aa4505c42863b581f9d46b36b" gracePeriod=30 Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.031688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58cc9986b4-t7tcs" event={"ID":"5fe017d9-f16b-465c-97a0-ebe4466006f0","Type":"ContainerStarted","Data":"38ceb4058217c0ec406f6906361fb3fcf1347fbf8b5a7281ed9913fba53e9ba3"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.045947 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vnlkg" podStartSLOduration=23.045927462 podStartE2EDuration="23.045927462s" podCreationTimestamp="2026-02-23 18:50:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:35.013552315 +0000 UTC m=+1030.404038135" watchObservedRunningTime="2026-02-23 18:50:35.045927462 +0000 UTC m=+1030.436413262" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.047675 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerStarted","Data":"9f8efe6e80f54beb5fe08521101397fffab4b5b3e18db77df7fc9b94f097ee2e"} Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.078710 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.080197 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.083803 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.083984 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.085270 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-67699f99c7-5rzsw" podStartSLOduration=4.005829096 podStartE2EDuration="33.085239679s" podCreationTimestamp="2026-02-23 18:50:02 +0000 UTC" firstStartedPulling="2026-02-23 18:50:04.136182631 +0000 UTC m=+999.526668431" lastFinishedPulling="2026-02-23 18:50:33.215593214 +0000 UTC m=+1028.606079014" observedRunningTime="2026-02-23 18:50:35.050902498 +0000 UTC m=+1030.441388288" watchObservedRunningTime="2026-02-23 18:50:35.085239679 +0000 UTC m=+1030.475725479" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.101269 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205409 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm6x7\" (UniqueName: \"kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 
18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205456 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205742 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.205988 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.206107 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 
crc kubenswrapper[4768]: I0223 18:50:35.308344 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308397 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm6x7\" (UniqueName: \"kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308423 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308512 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308539 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.308570 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.319766 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.324934 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.325154 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.325450 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.363004 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.370046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm6x7\" (UniqueName: \"kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.373149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs\") pod \"neutron-66d66bdc85-82928\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:35 crc kubenswrapper[4768]: I0223 18:50:35.508280 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.096719 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerStarted","Data":"c569cc9ba619df1b2cada5105fae786aba2b94fd34ece8f1c107172ce3fc5e44"} Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.097640 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bc449878f-7drht" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon-log" containerID="cri-o://f9e3461a2c97be4605ebd45790a637f855c5964ae39517275b37c79e5e416163" gracePeriod=30 Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.098334 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-bc449878f-7drht" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon" containerID="cri-o://c569cc9ba619df1b2cada5105fae786aba2b94fd34ece8f1c107172ce3fc5e44" gracePeriod=30 Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.123759 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n58l7" event={"ID":"cd2ba036-bbca-4b94-8f72-70e252e5a2b9","Type":"ContainerStarted","Data":"96e5b527a04a8c17b4de12ae091944d1fbeb89be1a996a9621eaffc7ba3a3783"} Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.133761 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerStarted","Data":"a8b896bc35a90342c52e7fd2aa30b84aefe074f3b241b438ecfa2e1f371e5920"} Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.152470 4768 generic.go:334] "Generic (PLEG): container finished" podID="2534372c-ef07-45f2-917b-912873de873d" containerID="37f8f9cd0693fcdf36364cb8f6d986e9e8ad77fd48d6881a99b6109e6cef4fde" exitCode=0 Feb 23 18:50:36 crc kubenswrapper[4768]: 
I0223 18:50:36.153318 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" event={"ID":"2534372c-ef07-45f2-917b-912873de873d","Type":"ContainerDied","Data":"37f8f9cd0693fcdf36364cb8f6d986e9e8ad77fd48d6881a99b6109e6cef4fde"} Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.154770 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-bc449878f-7drht" podStartSLOduration=6.243659435 podStartE2EDuration="37.154749364s" podCreationTimestamp="2026-02-23 18:49:59 +0000 UTC" firstStartedPulling="2026-02-23 18:50:02.304544066 +0000 UTC m=+997.695029866" lastFinishedPulling="2026-02-23 18:50:33.215633995 +0000 UTC m=+1028.606119795" observedRunningTime="2026-02-23 18:50:36.150981251 +0000 UTC m=+1031.541467051" watchObservedRunningTime="2026-02-23 18:50:36.154749364 +0000 UTC m=+1031.545235164" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.175381 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.178821 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n58l7" podStartSLOduration=3.6891954399999998 podStartE2EDuration="36.178797704s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="2026-02-23 18:50:02.304423303 +0000 UTC m=+997.694909103" lastFinishedPulling="2026-02-23 18:50:34.794025567 +0000 UTC m=+1030.184511367" observedRunningTime="2026-02-23 18:50:36.17646525 +0000 UTC m=+1031.566951050" watchObservedRunningTime="2026-02-23 18:50:36.178797704 +0000 UTC m=+1031.569283504" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.197517 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-58cc9986b4-t7tcs" 
event={"ID":"5fe017d9-f16b-465c-97a0-ebe4466006f0","Type":"ContainerStarted","Data":"d9f0d372a8411ac1e70f174971e4c27cdeb1e93e2b1b639b2ff66ba66dee85fb"} Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.215127 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-84699c9d66-ghjfn" podStartSLOduration=26.215109879 podStartE2EDuration="26.215109879s" podCreationTimestamp="2026-02-23 18:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:36.207653294 +0000 UTC m=+1031.598139094" watchObservedRunningTime="2026-02-23 18:50:36.215109879 +0000 UTC m=+1031.605595679" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.274716 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-58cc9986b4-t7tcs" podStartSLOduration=26.274696192 podStartE2EDuration="26.274696192s" podCreationTimestamp="2026-02-23 18:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:36.269359136 +0000 UTC m=+1031.659844926" watchObservedRunningTime="2026-02-23 18:50:36.274696192 +0000 UTC m=+1031.665181992" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.325946 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86445c674d-k7fnl" podStartSLOduration=4.325918566 podStartE2EDuration="4.325918566s" podCreationTimestamp="2026-02-23 18:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:36.307217234 +0000 UTC m=+1031.697703034" watchObservedRunningTime="2026-02-23 18:50:36.325918566 +0000 UTC m=+1031.716404396" Feb 23 18:50:36 crc kubenswrapper[4768]: I0223 18:50:36.457969 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.232725 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/0.log" Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.239028 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9d24a80-bd92-4752-8786-147975b15301" containerID="a79fd77f90e9220459e902aa071b52488c715e4407d3df3357a69dbc72e4b4fd" exitCode=1 Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.239117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerDied","Data":"a79fd77f90e9220459e902aa071b52488c715e4407d3df3357a69dbc72e4b4fd"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.240021 4768 scope.go:117] "RemoveContainer" containerID="a79fd77f90e9220459e902aa071b52488c715e4407d3df3357a69dbc72e4b4fd" Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.264142 4768 generic.go:334] "Generic (PLEG): container finished" podID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerID="63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e" exitCode=0 Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.264231 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerDied","Data":"63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.271444 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerStarted","Data":"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.271483 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerStarted","Data":"90675218dc0f7add28852c31d0aef6e9a606107d12dc1878517659291260f06c"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.282554 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerStarted","Data":"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.299594 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerStarted","Data":"0caacee10b039db6853d294feb43412aaf14149f5647aeea215487b8fb6ab45f"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.299742 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-log" containerID="cri-o://9f8efe6e80f54beb5fe08521101397fffab4b5b3e18db77df7fc9b94f097ee2e" gracePeriod=30 Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.300068 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-httpd" containerID="cri-o://0caacee10b039db6853d294feb43412aaf14149f5647aeea215487b8fb6ab45f" gracePeriod=30 Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.362611 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.362644 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" 
event={"ID":"2534372c-ef07-45f2-917b-912873de873d","Type":"ContainerStarted","Data":"4c522c96ef0f0450410afdc370df1bed13c1171cc78fc21bea2d3aac33fe2094"} Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.369926 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=30.369905211 podStartE2EDuration="30.369905211s" podCreationTimestamp="2026-02-23 18:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:37.329082372 +0000 UTC m=+1032.719568172" watchObservedRunningTime="2026-02-23 18:50:37.369905211 +0000 UTC m=+1032.760391011" Feb 23 18:50:37 crc kubenswrapper[4768]: I0223 18:50:37.372299 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" podStartSLOduration=5.372292407 podStartE2EDuration="5.372292407s" podCreationTimestamp="2026-02-23 18:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:37.347755574 +0000 UTC m=+1032.738241374" watchObservedRunningTime="2026-02-23 18:50:37.372292407 +0000 UTC m=+1032.762778207" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.196961 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.197757 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.338706 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerStarted","Data":"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf"} Feb 23 18:50:38 crc 
kubenswrapper[4768]: I0223 18:50:38.338838 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.341517 4768 generic.go:334] "Generic (PLEG): container finished" podID="68196292-495d-4c68-b2be-6a5be26281c0" containerID="0caacee10b039db6853d294feb43412aaf14149f5647aeea215487b8fb6ab45f" exitCode=0 Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.341550 4768 generic.go:334] "Generic (PLEG): container finished" podID="68196292-495d-4c68-b2be-6a5be26281c0" containerID="9f8efe6e80f54beb5fe08521101397fffab4b5b3e18db77df7fc9b94f097ee2e" exitCode=143 Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.341649 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerDied","Data":"0caacee10b039db6853d294feb43412aaf14149f5647aeea215487b8fb6ab45f"} Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.341724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerDied","Data":"9f8efe6e80f54beb5fe08521101397fffab4b5b3e18db77df7fc9b94f097ee2e"} Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.345632 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/1.log" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.346169 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/0.log" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.346513 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9d24a80-bd92-4752-8786-147975b15301" containerID="d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8" exitCode=1 Feb 23 
18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.346578 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerDied","Data":"d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8"} Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.346622 4768 scope.go:117] "RemoveContainer" containerID="a79fd77f90e9220459e902aa071b52488c715e4407d3df3357a69dbc72e4b4fd" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.347414 4768 scope.go:117] "RemoveContainer" containerID="d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8" Feb 23 18:50:38 crc kubenswrapper[4768]: E0223 18:50:38.347637 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-86445c674d-k7fnl_openstack(e9d24a80-bd92-4752-8786-147975b15301)\"" pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.370395 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66d66bdc85-82928" podStartSLOduration=3.370376234 podStartE2EDuration="3.370376234s" podCreationTimestamp="2026-02-23 18:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:38.368132122 +0000 UTC m=+1033.758617922" watchObservedRunningTime="2026-02-23 18:50:38.370376234 +0000 UTC m=+1033.760862034" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.390380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerStarted","Data":"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62"} Feb 23 18:50:38 crc 
kubenswrapper[4768]: I0223 18:50:38.489108 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kv44j" podStartSLOduration=21.645362653 podStartE2EDuration="24.489082858s" podCreationTimestamp="2026-02-23 18:50:14 +0000 UTC" firstStartedPulling="2026-02-23 18:50:34.970977778 +0000 UTC m=+1030.361463568" lastFinishedPulling="2026-02-23 18:50:37.814697973 +0000 UTC m=+1033.205183773" observedRunningTime="2026-02-23 18:50:38.48152461 +0000 UTC m=+1033.872010410" watchObservedRunningTime="2026-02-23 18:50:38.489082858 +0000 UTC m=+1033.879568658" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.490712 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.490704182 podStartE2EDuration="7.490704182s" podCreationTimestamp="2026-02-23 18:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:38.451573509 +0000 UTC m=+1033.842059329" watchObservedRunningTime="2026-02-23 18:50:38.490704182 +0000 UTC m=+1033.881189972" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.834854 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.984604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.984720 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.984761 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.984804 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.984924 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985005 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxv89\" (UniqueName: 
\"kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985083 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"68196292-495d-4c68-b2be-6a5be26281c0\" (UID: \"68196292-495d-4c68-b2be-6a5be26281c0\") " Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985209 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985401 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs" (OuterVolumeSpecName: "logs") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985891 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.985907 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68196292-495d-4c68-b2be-6a5be26281c0-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:38 crc kubenswrapper[4768]: I0223 18:50:38.996925 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89" (OuterVolumeSpecName: "kube-api-access-sxv89") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "kube-api-access-sxv89". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.000523 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.010875 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts" (OuterVolumeSpecName: "scripts") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.063334 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.088608 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxv89\" (UniqueName: \"kubernetes.io/projected/68196292-495d-4c68-b2be-6a5be26281c0-kube-api-access-sxv89\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.088673 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.088687 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.088699 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.101754 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data" (OuterVolumeSpecName: "config-data") pod "68196292-495d-4c68-b2be-6a5be26281c0" (UID: "68196292-495d-4c68-b2be-6a5be26281c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.128305 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.190412 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68196292-495d-4c68-b2be-6a5be26281c0-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.190442 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.432323 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/1.log" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.438779 4768 scope.go:117] "RemoveContainer" containerID="d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8" Feb 23 18:50:39 crc kubenswrapper[4768]: E0223 18:50:39.440795 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 10s restarting failed container=neutron-httpd pod=neutron-86445c674d-k7fnl_openstack(e9d24a80-bd92-4752-8786-147975b15301)\"" pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.460611 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerStarted","Data":"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d"} Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.483525 4768 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.484310 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"68196292-495d-4c68-b2be-6a5be26281c0","Type":"ContainerDied","Data":"8cdb26ff854ef5381a72deb2eb644f1c7eb33d68a017b2f964aad8031ff233ba"} Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.484380 4768 scope.go:117] "RemoveContainer" containerID="0caacee10b039db6853d294feb43412aaf14149f5647aeea215487b8fb6ab45f" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.523671 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.536136 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.546457 4768 scope.go:117] "RemoveContainer" containerID="9f8efe6e80f54beb5fe08521101397fffab4b5b3e18db77df7fc9b94f097ee2e" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.553290 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:39 crc kubenswrapper[4768]: E0223 18:50:39.553708 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-httpd" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.553726 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-httpd" Feb 23 18:50:39 crc kubenswrapper[4768]: E0223 18:50:39.553744 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-log" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.553750 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-log" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.553951 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-httpd" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.553984 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="68196292-495d-4c68-b2be-6a5be26281c0" containerName="glance-log" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.554951 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.561593 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.561861 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.598270 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719306 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 
18:50:39.719388 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719436 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vmfq\" (UniqueName: \"kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719460 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719740 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 
23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.719827 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.821860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.821929 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.821959 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.822006 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vmfq\" (UniqueName: \"kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.822028 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.822057 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.822082 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.822114 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.823728 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.829054 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.829357 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.830458 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.836860 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.842770 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.854750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.858331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vmfq\" (UniqueName: \"kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.866497 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " pod="openstack/glance-default-external-api-0" Feb 23 18:50:39 crc kubenswrapper[4768]: I0223 18:50:39.899317 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.502219 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f5f03e9-0a62-4567-93d2-5abbb7b89219" containerID="1654e674e595c8dbe8a19648fa9dfbd91bd5a475b5d43e64650b9e8dfe99478a" exitCode=0 Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.502286 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vnlkg" event={"ID":"6f5f03e9-0a62-4567-93d2-5abbb7b89219","Type":"ContainerDied","Data":"1654e674e595c8dbe8a19648fa9dfbd91bd5a475b5d43e64650b9e8dfe99478a"} Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.508878 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.587565 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-bc449878f-7drht" Feb 23 18:50:40 crc 
kubenswrapper[4768]: I0223 18:50:40.718798 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.718869 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.751391 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:40 crc kubenswrapper[4768]: I0223 18:50:40.751453 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-58cc9986b4-t7tcs" Feb 23 18:50:41 crc kubenswrapper[4768]: E0223 18:50:41.167711 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd2ba036_bbca_4b94_8f72_70e252e5a2b9.slice/crio-conmon-96e5b527a04a8c17b4de12ae091944d1fbeb89be1a996a9621eaffc7ba3a3783.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:50:41 crc kubenswrapper[4768]: I0223 18:50:41.322904 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68196292-495d-4c68-b2be-6a5be26281c0" path="/var/lib/kubelet/pods/68196292-495d-4c68-b2be-6a5be26281c0/volumes" Feb 23 18:50:41 crc kubenswrapper[4768]: I0223 18:50:41.522050 4768 generic.go:334] "Generic (PLEG): container finished" podID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" containerID="96e5b527a04a8c17b4de12ae091944d1fbeb89be1a996a9621eaffc7ba3a3783" exitCode=0 Feb 23 18:50:41 crc kubenswrapper[4768]: I0223 18:50:41.522338 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n58l7" event={"ID":"cd2ba036-bbca-4b94-8f72-70e252e5a2b9","Type":"ContainerDied","Data":"96e5b527a04a8c17b4de12ae091944d1fbeb89be1a996a9621eaffc7ba3a3783"} Feb 23 18:50:41 crc kubenswrapper[4768]: I0223 18:50:41.658461 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.249772 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.250498 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.313967 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.318280 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.530774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.531125 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.828101 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.916080 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"] Feb 23 18:50:42 crc kubenswrapper[4768]: I0223 18:50:42.916588 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="dnsmasq-dns" containerID="cri-o://ba052cf0e8dd4ed33bfc1a58960d20b7dfde90d61757b02ee78d2091e231ed48" gracePeriod=10 Feb 23 18:50:43 crc kubenswrapper[4768]: I0223 18:50:43.400409 4768 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:50:43 crc kubenswrapper[4768]: I0223 18:50:43.543181 4768 generic.go:334] "Generic (PLEG): container finished" podID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerID="ba052cf0e8dd4ed33bfc1a58960d20b7dfde90d61757b02ee78d2091e231ed48" exitCode=0 Feb 23 18:50:43 crc kubenswrapper[4768]: I0223 18:50:43.543265 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" event={"ID":"21d677b5-cbc7-4501-addc-9e06c0bb8990","Type":"ContainerDied","Data":"ba052cf0e8dd4ed33bfc1a58960d20b7dfde90d61757b02ee78d2091e231ed48"} Feb 23 18:50:44 crc kubenswrapper[4768]: I0223 18:50:44.558214 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:50:44 crc kubenswrapper[4768]: I0223 18:50:44.558260 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:50:44 crc kubenswrapper[4768]: I0223 18:50:44.607141 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:44 crc kubenswrapper[4768]: I0223 18:50:44.607196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:50:45 crc kubenswrapper[4768]: I0223 18:50:45.656416 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kv44j" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" probeResult="failure" output=< Feb 23 18:50:45 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 18:50:45 crc kubenswrapper[4768]: > Feb 23 18:50:45 crc kubenswrapper[4768]: I0223 18:50:45.657206 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:45 crc kubenswrapper[4768]: I0223 18:50:45.657476 4768 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:50:45 crc kubenswrapper[4768]: I0223 18:50:45.657798 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 18:50:46 crc kubenswrapper[4768]: I0223 18:50:46.332376 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Feb 23 18:50:47 crc kubenswrapper[4768]: W0223 18:50:47.362856 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2a65a3f_ebd4_46e8_89bb_b402f6c91882.slice/crio-0d84735f71b9a31f0d68faf09e7f907a55a352501910b0bbf4d68bc4fc5d4512 WatchSource:0}: Error finding container 0d84735f71b9a31f0d68faf09e7f907a55a352501910b0bbf4d68bc4fc5d4512: Status 404 returned error can't find the container with id 0d84735f71b9a31f0d68faf09e7f907a55a352501910b0bbf4d68bc4fc5d4512 Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.477865 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.487628 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.524382 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.524452 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t58xn\" (UniqueName: \"kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.524521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle\") pod \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528715 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528787 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528862 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs\") pod \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528912 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts\") pod \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkhn2\" (UniqueName: \"kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2\") pod \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.528960 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data\") pod \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\" (UID: \"cd2ba036-bbca-4b94-8f72-70e252e5a2b9\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.529071 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys\") pod \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\" (UID: \"6f5f03e9-0a62-4567-93d2-5abbb7b89219\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.534602 4768 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn" (OuterVolumeSpecName: "kube-api-access-t58xn") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "kube-api-access-t58xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.534899 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs" (OuterVolumeSpecName: "logs") pod "cd2ba036-bbca-4b94-8f72-70e252e5a2b9" (UID: "cd2ba036-bbca-4b94-8f72-70e252e5a2b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.542860 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.554497 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts" (OuterVolumeSpecName: "scripts") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.568630 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts" (OuterVolumeSpecName: "scripts") pod "cd2ba036-bbca-4b94-8f72-70e252e5a2b9" (UID: "cd2ba036-bbca-4b94-8f72-70e252e5a2b9"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.568758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2" (OuterVolumeSpecName: "kube-api-access-lkhn2") pod "cd2ba036-bbca-4b94-8f72-70e252e5a2b9" (UID: "cd2ba036-bbca-4b94-8f72-70e252e5a2b9"). InnerVolumeSpecName "kube-api-access-lkhn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.570164 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.620057 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n58l7" event={"ID":"cd2ba036-bbca-4b94-8f72-70e252e5a2b9","Type":"ContainerDied","Data":"1816da3801c98d4284aae9d84f440a7c958f594e2a23002641aae391b4a56d22"} Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.620169 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1816da3801c98d4284aae9d84f440a7c958f594e2a23002641aae391b4a56d22" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.620271 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n58l7" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.630515 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vnlkg" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.630510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vnlkg" event={"ID":"6f5f03e9-0a62-4567-93d2-5abbb7b89219","Type":"ContainerDied","Data":"4d4e6c7842dd09fd28735732353b7e926bcaa330c91cb6b23f23b8c70ebb8025"} Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.630616 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d4e6c7842dd09fd28735732353b7e926bcaa330c91cb6b23f23b8c70ebb8025" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631644 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631661 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t58xn\" (UniqueName: \"kubernetes.io/projected/6f5f03e9-0a62-4567-93d2-5abbb7b89219-kube-api-access-t58xn\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631672 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631681 4768 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631690 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631698 4768 reconciler_common.go:293] 
"Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.631706 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkhn2\" (UniqueName: \"kubernetes.io/projected/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-kube-api-access-lkhn2\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.635408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd2ba036-bbca-4b94-8f72-70e252e5a2b9" (UID: "cd2ba036-bbca-4b94-8f72-70e252e5a2b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.640373 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerStarted","Data":"0d84735f71b9a31f0d68faf09e7f907a55a352501910b0bbf4d68bc4fc5d4512"} Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.719405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.719816 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data" (OuterVolumeSpecName: "config-data") pod "cd2ba036-bbca-4b94-8f72-70e252e5a2b9" (UID: "cd2ba036-bbca-4b94-8f72-70e252e5a2b9"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.734562 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.734595 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.734608 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd2ba036-bbca-4b94-8f72-70e252e5a2b9-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.791603 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data" (OuterVolumeSpecName: "config-data") pod "6f5f03e9-0a62-4567-93d2-5abbb7b89219" (UID: "6f5f03e9-0a62-4567-93d2-5abbb7b89219"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.837040 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f5f03e9-0a62-4567-93d2-5abbb7b89219-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.872438 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938292 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pw2c\" (UniqueName: \"kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938438 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938538 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.938657 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config\") pod \"21d677b5-cbc7-4501-addc-9e06c0bb8990\" (UID: \"21d677b5-cbc7-4501-addc-9e06c0bb8990\") " Feb 23 18:50:47 crc kubenswrapper[4768]: I0223 18:50:47.947708 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c" (OuterVolumeSpecName: "kube-api-access-4pw2c") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "kube-api-access-4pw2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.008999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.010270 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config" (OuterVolumeSpecName: "config") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.019821 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.039467 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041102 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041133 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041164 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pw2c\" (UniqueName: \"kubernetes.io/projected/21d677b5-cbc7-4501-addc-9e06c0bb8990-kube-api-access-4pw2c\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041176 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041187 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.041531 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "21d677b5-cbc7-4501-addc-9e06c0bb8990" (UID: "21d677b5-cbc7-4501-addc-9e06c0bb8990"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.144414 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/21d677b5-cbc7-4501-addc-9e06c0bb8990-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.667743 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerStarted","Data":"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad"} Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.680476 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hcnm6" event={"ID":"6f6df03b-46d7-4b9e-a9cd-949eca9bf718","Type":"ContainerStarted","Data":"4884ef943d4fbca17aa68e175d80de9e8f4e32368167654f49b4b864d3ac8008"} Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.695539 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"] Feb 23 18:50:48 crc kubenswrapper[4768]: E0223 18:50:48.696113 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" containerName="placement-db-sync" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696136 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" containerName="placement-db-sync" Feb 23 18:50:48 crc kubenswrapper[4768]: E0223 18:50:48.696160 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="dnsmasq-dns" Feb 23 18:50:48 crc kubenswrapper[4768]: 
I0223 18:50:48.696168 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="dnsmasq-dns" Feb 23 18:50:48 crc kubenswrapper[4768]: E0223 18:50:48.696195 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5f03e9-0a62-4567-93d2-5abbb7b89219" containerName="keystone-bootstrap" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696202 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5f03e9-0a62-4567-93d2-5abbb7b89219" containerName="keystone-bootstrap" Feb 23 18:50:48 crc kubenswrapper[4768]: E0223 18:50:48.696224 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="init" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696230 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="init" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696445 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" containerName="dnsmasq-dns" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696468 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5f03e9-0a62-4567-93d2-5abbb7b89219" containerName="keystone-bootstrap" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.696490 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" containerName="placement-db-sync" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.697811 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.703681 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-sd75x" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.704021 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.704110 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.704460 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.705485 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerStarted","Data":"a1d60d686b6efc7feaff457befce0ee53193aaa2baddae35f0c7e0e5de401a19"} Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.713320 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f7bc597d-jphlt"] Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.714586 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.714832 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qbknd" event={"ID":"21d677b5-cbc7-4501-addc-9e06c0bb8990","Type":"ContainerDied","Data":"8eeab25e22bc469e9a8b3eb90672518be6ddf6333726526d8f0e37cb6e4ad28c"} Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.714885 4768 scope.go:117] "RemoveContainer" containerID="ba052cf0e8dd4ed33bfc1a58960d20b7dfde90d61757b02ee78d2091e231ed48" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.715040 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.717599 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.718400 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.718671 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.718868 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.718981 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-ftws5" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.719033 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.728856 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.763772 4768 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/keystone-7f7bc597d-jphlt"] Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766520 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-credential-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766626 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-combined-ca-bundle\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766710 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-config-data\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766764 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-fernet-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: 
\"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766833 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw784\" (UniqueName: \"kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.766943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767051 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvkn\" (UniqueName: \"kubernetes.io/projected/f3305106-4005-472a-980a-3030ee27d1bb-kube-api-access-zlvkn\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767122 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-internal-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767142 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts\") pod \"placement-6cdff58f68-7n8ch\" 
(UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767162 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767239 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-scripts\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.767408 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-public-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.771792 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs\") pod 
\"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.789390 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"]
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.790410 4768 scope.go:117] "RemoveContainer" containerID="abe14ab1439a652d45093c4365ac8e945e67ea74fec2d9ec4e46c5abcb408834"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.806664 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hcnm6" podStartSLOduration=3.921208898 podStartE2EDuration="48.806630477s" podCreationTimestamp="2026-02-23 18:50:00 +0000 UTC" firstStartedPulling="2026-02-23 18:50:02.701091795 +0000 UTC m=+998.091577595" lastFinishedPulling="2026-02-23 18:50:47.586513374 +0000 UTC m=+1042.976999174" observedRunningTime="2026-02-23 18:50:48.707930931 +0000 UTC m=+1044.098416731" watchObservedRunningTime="2026-02-23 18:50:48.806630477 +0000 UTC m=+1044.197116277"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw784\" (UniqueName: \"kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873400 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873441 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlvkn\" (UniqueName: \"kubernetes.io/projected/f3305106-4005-472a-980a-3030ee27d1bb-kube-api-access-zlvkn\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873466 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-internal-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873482 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873497 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873520 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-scripts\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873554 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-public-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873622 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-credential-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-combined-ca-bundle\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873698 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-config-data\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.873720 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-fernet-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.874456 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"]
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.880944 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-fernet-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.882179 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.883492 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.886651 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-config-data\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.889570 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.891563 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qbknd"]
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.894825 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-combined-ca-bundle\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.894959 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.898395 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.898768 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-credential-keys\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.899870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.905729 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-scripts\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.906914 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-public-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.907509 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3305106-4005-472a-980a-3030ee27d1bb-internal-tls-certs\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.913593 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlvkn\" (UniqueName: \"kubernetes.io/projected/f3305106-4005-472a-980a-3030ee27d1bb-kube-api-access-zlvkn\") pod \"keystone-7f7bc597d-jphlt\" (UID: \"f3305106-4005-472a-980a-3030ee27d1bb\") " pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:48 crc kubenswrapper[4768]: I0223 18:50:48.926017 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw784\" (UniqueName: \"kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784\") pod \"placement-6cdff58f68-7n8ch\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.055720 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.059683 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.339706 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d677b5-cbc7-4501-addc-9e06c0bb8990" path="/var/lib/kubelet/pods/21d677b5-cbc7-4501-addc-9e06c0bb8990/volumes"
Feb 23 18:50:49 crc kubenswrapper[4768]: W0223 18:50:49.626409 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda54b90e0_5929_42b7_94bc_8eb916ce8bde.slice/crio-4c6b9d20115e3e98a712fb9de38bfc508bd98ce63d226cbf4d64a29ffb333f51 WatchSource:0}: Error finding container 4c6b9d20115e3e98a712fb9de38bfc508bd98ce63d226cbf4d64a29ffb333f51: Status 404 returned error can't find the container with id 4c6b9d20115e3e98a712fb9de38bfc508bd98ce63d226cbf4d64a29ffb333f51
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.634306 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"]
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.737429 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f7bc597d-jphlt"]
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.783539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerStarted","Data":"4c6b9d20115e3e98a712fb9de38bfc508bd98ce63d226cbf4d64a29ffb333f51"}
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.816682 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerStarted","Data":"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3"}
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.837205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zv7fq" event={"ID":"d689e8c1-2c72-4fe1-890c-ba586628dd4b","Type":"ContainerStarted","Data":"98aebede44299fee775fd2b2371373a24ef04409aeb5042213d336a34d8b7012"}
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.863506 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.863486625 podStartE2EDuration="10.863486625s" podCreationTimestamp="2026-02-23 18:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:49.844524445 +0000 UTC m=+1045.235010255" watchObservedRunningTime="2026-02-23 18:50:49.863486625 +0000 UTC m=+1045.253972425"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.885442 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zv7fq" podStartSLOduration=5.009097278 podStartE2EDuration="50.885418856s" podCreationTimestamp="2026-02-23 18:49:59 +0000 UTC" firstStartedPulling="2026-02-23 18:50:01.698556417 +0000 UTC m=+997.089042207" lastFinishedPulling="2026-02-23 18:50:47.574877985 +0000 UTC m=+1042.965363785" observedRunningTime="2026-02-23 18:50:49.873743076 +0000 UTC m=+1045.264228876" watchObservedRunningTime="2026-02-23 18:50:49.885418856 +0000 UTC m=+1045.275904656"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.901141 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.901185 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.949956 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:49 crc kubenswrapper[4768]: I0223 18:50:49.977692 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.580418 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-574fcfd8cb-8sv54"]
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.582281 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.613309 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-574fcfd8cb-8sv54"]
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.652383 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-combined-ca-bundle\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.652770 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7nbq\" (UniqueName: \"kubernetes.io/projected/77c8192d-2048-476f-af50-d65602ec4d05-kube-api-access-x7nbq\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.652994 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-public-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.653077 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c8192d-2048-476f-af50-d65602ec4d05-logs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.653348 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-internal-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.653404 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-config-data\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.653434 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-scripts\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.720812 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755429 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c8192d-2048-476f-af50-d65602ec4d05-logs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-internal-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755615 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-config-data\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755644 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-scripts\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755690 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-combined-ca-bundle\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7nbq\" (UniqueName: \"kubernetes.io/projected/77c8192d-2048-476f-af50-d65602ec4d05-kube-api-access-x7nbq\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.755765 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-public-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.759205 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-58cc9986b4-t7tcs" podUID="5fe017d9-f16b-465c-97a0-ebe4466006f0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.759666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77c8192d-2048-476f-af50-d65602ec4d05-logs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.766774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-scripts\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.773948 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-combined-ca-bundle\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.774070 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-config-data\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.775155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-public-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.776895 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77c8192d-2048-476f-af50-d65602ec4d05-internal-tls-certs\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.779433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7nbq\" (UniqueName: \"kubernetes.io/projected/77c8192d-2048-476f-af50-d65602ec4d05-kube-api-access-x7nbq\") pod \"placement-574fcfd8cb-8sv54\" (UID: \"77c8192d-2048-476f-af50-d65602ec4d05\") " pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.861531 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f7bc597d-jphlt" event={"ID":"f3305106-4005-472a-980a-3030ee27d1bb","Type":"ContainerStarted","Data":"da30e2990ba1be0dbd691811d07e8e9b1a248e5f8306fda27ede8c7980e3f8fb"}
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.861581 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f7bc597d-jphlt" event={"ID":"f3305106-4005-472a-980a-3030ee27d1bb","Type":"ContainerStarted","Data":"a5044c15d1faf1e9b391426911c3cb7bfdcee0e690566f59b9693ea0f2d0dc8a"}
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.862561 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f7bc597d-jphlt"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879482 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerStarted","Data":"b3b281fb91b9a51cf32e69a11331e5cb0b62fa031b0026402ec1ee29425193c9"}
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879556 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879570 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerStarted","Data":"020d0d98589f3508180b9e8f1cc77361ae54bab645d684cbdb0d76775d09bb3c"}
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879580 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879616 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.879730 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6cdff58f68-7n8ch"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.891979 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f7bc597d-jphlt" podStartSLOduration=2.891954925 podStartE2EDuration="2.891954925s" podCreationTimestamp="2026-02-23 18:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:50.883422351 +0000 UTC m=+1046.273908151" watchObservedRunningTime="2026-02-23 18:50:50.891954925 +0000 UTC m=+1046.282440745"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.909167 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6cdff58f68-7n8ch" podStartSLOduration=2.909149676 podStartE2EDuration="2.909149676s" podCreationTimestamp="2026-02-23 18:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:50.908893389 +0000 UTC m=+1046.299379189" watchObservedRunningTime="2026-02-23 18:50:50.909149676 +0000 UTC m=+1046.299635476"
Feb 23 18:50:50 crc kubenswrapper[4768]: I0223 18:50:50.954652 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:51 crc kubenswrapper[4768]: I0223 18:50:51.459920 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-574fcfd8cb-8sv54"]
Feb 23 18:50:51 crc kubenswrapper[4768]: W0223 18:50:51.475369 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c8192d_2048_476f_af50_d65602ec4d05.slice/crio-4fc792f90156f3967dfd898f164951237f47bec94d62ce49af4f4c9ea2ea3992 WatchSource:0}: Error finding container 4fc792f90156f3967dfd898f164951237f47bec94d62ce49af4f4c9ea2ea3992: Status 404 returned error can't find the container with id 4fc792f90156f3967dfd898f164951237f47bec94d62ce49af4f4c9ea2ea3992
Feb 23 18:50:51 crc kubenswrapper[4768]: I0223 18:50:51.904736 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574fcfd8cb-8sv54" event={"ID":"77c8192d-2048-476f-af50-d65602ec4d05","Type":"ContainerStarted","Data":"1e2204cd9db69388dde7f5a41d1ab691a5a3b3382e628cebcb617b403703eaed"}
Feb 23 18:50:51 crc kubenswrapper[4768]: I0223 18:50:51.905163 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574fcfd8cb-8sv54" event={"ID":"77c8192d-2048-476f-af50-d65602ec4d05","Type":"ContainerStarted","Data":"4fc792f90156f3967dfd898f164951237f47bec94d62ce49af4f4c9ea2ea3992"}
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.916949 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-574fcfd8cb-8sv54" event={"ID":"77c8192d-2048-476f-af50-d65602ec4d05","Type":"ContainerStarted","Data":"7b235c934aef780b40d0a2c8cd9770935939972989dac16884dc42a1e66d60e9"}
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.917327 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.917345 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-574fcfd8cb-8sv54"
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.920306 4768 generic.go:334] "Generic (PLEG): container finished" podID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" containerID="4884ef943d4fbca17aa68e175d80de9e8f4e32368167654f49b4b864d3ac8008" exitCode=0
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.920861 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hcnm6" event={"ID":"6f6df03b-46d7-4b9e-a9cd-949eca9bf718","Type":"ContainerDied","Data":"4884ef943d4fbca17aa68e175d80de9e8f4e32368167654f49b4b864d3ac8008"}
Feb 23 18:50:52 crc kubenswrapper[4768]: I0223 18:50:52.951746 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-574fcfd8cb-8sv54" podStartSLOduration=2.951727192 podStartE2EDuration="2.951727192s" podCreationTimestamp="2026-02-23 18:50:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:50:52.933500183 +0000 UTC m=+1048.323985983" watchObservedRunningTime="2026-02-23 18:50:52.951727192 +0000 UTC m=+1048.342212992"
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.308350 4768 scope.go:117] "RemoveContainer" containerID="d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8"
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.936501 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/2.log"
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.937434 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/1.log"
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.937802 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9d24a80-bd92-4752-8786-147975b15301" containerID="8df7addf879faa1157ac93cad44dc2f5410a91d009a7a5a0ea5851cd81e98d8c" exitCode=1
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.937894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerDied","Data":"8df7addf879faa1157ac93cad44dc2f5410a91d009a7a5a0ea5851cd81e98d8c"}
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.937977 4768 scope.go:117] "RemoveContainer" containerID="d7d73b1431e0fe0f7c4afae7ab1e7a13f2f94ed9bd6b12ccde95845aca48e7e8"
Feb 23 18:50:53 crc kubenswrapper[4768]: I0223 18:50:53.938538 4768 scope.go:117] "RemoveContainer" containerID="8df7addf879faa1157ac93cad44dc2f5410a91d009a7a5a0ea5851cd81e98d8c"
Feb 23 18:50:53 crc kubenswrapper[4768]: E0223 18:50:53.938877 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-86445c674d-k7fnl_openstack(e9d24a80-bd92-4752-8786-147975b15301)\"" pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301"
Feb 23 18:50:54 crc kubenswrapper[4768]: I0223 18:50:54.950112 4768 generic.go:334] "Generic (PLEG): container finished" podID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" containerID="98aebede44299fee775fd2b2371373a24ef04409aeb5042213d336a34d8b7012" exitCode=0
Feb 23 18:50:54 crc kubenswrapper[4768]: I0223 18:50:54.950227 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zv7fq" event={"ID":"d689e8c1-2c72-4fe1-890c-ba586628dd4b","Type":"ContainerDied","Data":"98aebede44299fee775fd2b2371373a24ef04409aeb5042213d336a34d8b7012"}
Feb 23 18:50:55 crc kubenswrapper[4768]: I0223 18:50:55.667752 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kv44j" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" probeResult="failure" output=<
Feb 23 18:50:55 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s
Feb 23 18:50:55 crc kubenswrapper[4768]: >
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.632126 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zv7fq"
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.662204 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.663426 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.663676 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.663739 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55pqc\" (UniqueName: \"kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.663853 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.663909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data\") pod \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\" (UID: \"d689e8c1-2c72-4fe1-890c-ba586628dd4b\") "
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.664319 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.664517 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d689e8c1-2c72-4fe1-890c-ba586628dd4b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.670620 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.676309 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc" (OuterVolumeSpecName: "kube-api-access-55pqc") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "kube-api-access-55pqc".
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.690649 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts" (OuterVolumeSpecName: "scripts") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.734674 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data" (OuterVolumeSpecName: "config-data") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.746440 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d689e8c1-2c72-4fe1-890c-ba586628dd4b" (UID: "d689e8c1-2c72-4fe1-890c-ba586628dd4b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.766698 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.766735 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.766748 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.766760 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55pqc\" (UniqueName: \"kubernetes.io/projected/d689e8c1-2c72-4fe1-890c-ba586628dd4b-kube-api-access-55pqc\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:58 crc kubenswrapper[4768]: I0223 18:50:58.766775 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d689e8c1-2c72-4fe1-890c-ba586628dd4b-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.014535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zv7fq" event={"ID":"d689e8c1-2c72-4fe1-890c-ba586628dd4b","Type":"ContainerDied","Data":"77b1530100971f706edd575769ad7ed83d77de4da6bc802073a70ccd05d5228c"} Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.014597 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77b1530100971f706edd575769ad7ed83d77de4da6bc802073a70ccd05d5228c" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.014627 4768 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zv7fq" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.235726 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.276401 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle\") pod \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.276539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data\") pod \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.276672 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6q9\" (UniqueName: \"kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9\") pod \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\" (UID: \"6f6df03b-46d7-4b9e-a9cd-949eca9bf718\") " Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.280647 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9" (OuterVolumeSpecName: "kube-api-access-mb6q9") pod "6f6df03b-46d7-4b9e-a9cd-949eca9bf718" (UID: "6f6df03b-46d7-4b9e-a9cd-949eca9bf718"). InnerVolumeSpecName "kube-api-access-mb6q9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.285886 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6f6df03b-46d7-4b9e-a9cd-949eca9bf718" (UID: "6f6df03b-46d7-4b9e-a9cd-949eca9bf718"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.335805 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f6df03b-46d7-4b9e-a9cd-949eca9bf718" (UID: "6f6df03b-46d7-4b9e-a9cd-949eca9bf718"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.389801 4768 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.389836 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6q9\" (UniqueName: \"kubernetes.io/projected/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-kube-api-access-mb6q9\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:59 crc kubenswrapper[4768]: I0223 18:50:59.389847 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6df03b-46d7-4b9e-a9cd-949eca9bf718-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:50:59 crc kubenswrapper[4768]: E0223 18:50:59.507087 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.061967 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hcnm6" event={"ID":"6f6df03b-46d7-4b9e-a9cd-949eca9bf718","Type":"ContainerDied","Data":"e3bbe1fd3870d6223df96bac0da09bff37ba7b3c6696f72fc0eef38c7b17a176"} Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.062007 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3bbe1fd3870d6223df96bac0da09bff37ba7b3c6696f72fc0eef38c7b17a176" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.062075 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hcnm6" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.074065 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:00 crc kubenswrapper[4768]: E0223 18:51:00.074510 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" containerName="barbican-db-sync" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.074523 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" containerName="barbican-db-sync" Feb 23 18:51:00 crc kubenswrapper[4768]: E0223 18:51:00.074553 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" containerName="cinder-db-sync" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.074559 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" containerName="cinder-db-sync" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.074753 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" containerName="barbican-db-sync" 
Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.074767 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" containerName="cinder-db-sync" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.076079 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.080551 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.080749 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.080874 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.082614 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-mxzv8" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.083972 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.086440 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerStarted","Data":"f21e5e9db4adf7f22536ac20d3fbeca07bba909acfbb65f17802d715aedf6e9d"} Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.086592 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="ceilometer-notification-agent" containerID="cri-o://4f9201dd6adf2ed18bc2268e671843218e9442058f4722cbdeef4c484ce86cf3" gracePeriod=30 Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.086660 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.086699 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="proxy-httpd" containerID="cri-o://f21e5e9db4adf7f22536ac20d3fbeca07bba909acfbb65f17802d715aedf6e9d" gracePeriod=30 Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.086733 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="sg-core" containerID="cri-o://a1d60d686b6efc7feaff457befce0ee53193aaa2baddae35f0c7e0e5de401a19" gracePeriod=30 Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103699 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103800 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx2s9\" (UniqueName: \"kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103866 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.103912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.111331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/2.log" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.144923 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.146676 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.186158 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.210950 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211031 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211064 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211108 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rx2\" (UniqueName: \"kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211201 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211230 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211326 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx2s9\" (UniqueName: 
\"kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211390 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.211568 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.227797 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.236867 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.240019 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.252273 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx2s9\" (UniqueName: \"kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.252935 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.309651 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.315620 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.317691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rx2\" (UniqueName: \"kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.317767 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.317805 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.317834 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.317904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc 
kubenswrapper[4768]: I0223 18:51:00.317935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.319012 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.319763 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.342473 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.346477 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.346543 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: 
\"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.346894 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.376109 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.409036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rx2\" (UniqueName: \"kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2\") pod \"dnsmasq-dns-b895b5785-bf6w5\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.435049 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439330 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439414 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439436 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwh92\" (UniqueName: \"kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439467 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439511 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439619 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.439655 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.494469 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545509 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545651 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545679 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545711 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545728 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwh92\" (UniqueName: \"kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.545754 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.546151 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.558068 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " 
pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.571823 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.575546 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.577417 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.587368 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.642511 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwh92\" (UniqueName: \"kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92\") pod \"cinder-api-0\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.666768 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.683382 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-9495fd7c-5kc55"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.704068 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.707430 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9495fd7c-5kc55"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.710194 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fkv4b" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.712806 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.723086 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.723402 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.754216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data-custom\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.754317 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-combined-ca-bundle\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.754358 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649wt\" (UniqueName: \"kubernetes.io/projected/df97f54a-8ff1-4de9-9a88-80561f4aa819-kube-api-access-649wt\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.754379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df97f54a-8ff1-4de9-9a88-80561f4aa819-logs\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.754405 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.771778 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-58cc9986b4-t7tcs" podUID="5fe017d9-f16b-465c-97a0-ebe4466006f0" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.779355 
4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5df7bc8868-6w74x"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.786980 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.793634 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.794361 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5df7bc8868-6w74x"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856696 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-combined-ca-bundle\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856766 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-649wt\" (UniqueName: \"kubernetes.io/projected/df97f54a-8ff1-4de9-9a88-80561f4aa819-kube-api-access-649wt\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df97f54a-8ff1-4de9-9a88-80561f4aa819-logs\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856856 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data-custom\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-combined-ca-bundle\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856899 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93487b6e-adae-4467-bc6f-022380ad3028-logs\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856946 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgqnw\" (UniqueName: \"kubernetes.io/projected/93487b6e-adae-4467-bc6f-022380ad3028-kube-api-access-fgqnw\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 
23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.856972 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.857005 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data-custom\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.859683 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df97f54a-8ff1-4de9-9a88-80561f4aa819-logs\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.865296 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-combined-ca-bundle\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.874101 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data-custom\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc 
kubenswrapper[4768]: I0223 18:51:00.875056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df97f54a-8ff1-4de9-9a88-80561f4aa819-config-data\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.897798 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-649wt\" (UniqueName: \"kubernetes.io/projected/df97f54a-8ff1-4de9-9a88-80561f4aa819-kube-api-access-649wt\") pod \"barbican-worker-9495fd7c-5kc55\" (UID: \"df97f54a-8ff1-4de9-9a88-80561f4aa819\") " pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.939020 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962129 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgqnw\" (UniqueName: \"kubernetes.io/projected/93487b6e-adae-4467-bc6f-022380ad3028-kube-api-access-fgqnw\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962188 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data-custom\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-combined-ca-bundle\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962440 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93487b6e-adae-4467-bc6f-022380ad3028-logs\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.962963 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93487b6e-adae-4467-bc6f-022380ad3028-logs\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.979427 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-combined-ca-bundle\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.981202 4768 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.989405 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93487b6e-adae-4467-bc6f-022380ad3028-config-data-custom\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:00 crc kubenswrapper[4768]: I0223 18:51:00.999021 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgqnw\" (UniqueName: \"kubernetes.io/projected/93487b6e-adae-4467-bc6f-022380ad3028-kube-api-access-fgqnw\") pod \"barbican-keystone-listener-5df7bc8868-6w74x\" (UID: \"93487b6e-adae-4467-bc6f-022380ad3028\") " pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.042319 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.044042 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.063930 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.065355 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwfmn\" (UniqueName: \"kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.065489 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.065577 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.065694 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc\") pod 
\"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.065772 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.064815 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.101512 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.105957 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.114518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.146990 4768 generic.go:334] "Generic (PLEG): container finished" podID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerID="f21e5e9db4adf7f22536ac20d3fbeca07bba909acfbb65f17802d715aedf6e9d" exitCode=0 Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.147040 4768 generic.go:334] "Generic (PLEG): container finished" podID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerID="a1d60d686b6efc7feaff457befce0ee53193aaa2baddae35f0c7e0e5de401a19" exitCode=2 Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.147076 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerDied","Data":"f21e5e9db4adf7f22536ac20d3fbeca07bba909acfbb65f17802d715aedf6e9d"} Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.147125 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerDied","Data":"a1d60d686b6efc7feaff457befce0ee53193aaa2baddae35f0c7e0e5de401a19"} Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.148489 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwfmn\" (UniqueName: \"kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167743 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167769 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167821 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167842 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167879 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167897 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4428t\" (UniqueName: \"kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.167915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.169157 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.169175 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.169466 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.175830 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.176215 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.184183 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-9495fd7c-5kc55" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.201314 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwfmn\" (UniqueName: \"kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn\") pod \"dnsmasq-dns-5c9776ccc5-6tz6q\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.210165 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.271606 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.271650 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.271671 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4428t\" (UniqueName: \"kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.284825 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.284910 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " 
pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.292910 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.312478 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4428t\" (UniqueName: \"kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.318973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.327028 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.327069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle\") pod \"barbican-api-5fbfdc6854-l4dxh\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: 
I0223 18:51:01.412720 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.443555 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.501910 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:01 crc kubenswrapper[4768]: W0223 18:51:01.564372 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb3bace6_17e3_4fa5_9bff_2243a1c6b11f.slice/crio-c7e7efb4cad9b3ee55d97a991de2b2bab9a690bfedf381c267651bd63d334b16 WatchSource:0}: Error finding container c7e7efb4cad9b3ee55d97a991de2b2bab9a690bfedf381c267651bd63d334b16: Status 404 returned error can't find the container with id c7e7efb4cad9b3ee55d97a991de2b2bab9a690bfedf381c267651bd63d334b16 Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.629646 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.638691 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:01 crc kubenswrapper[4768]: I0223 18:51:01.674090 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-9495fd7c-5kc55"] Feb 23 18:51:01 crc kubenswrapper[4768]: W0223 18:51:01.846099 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf97f54a_8ff1_4de9_9a88_80561f4aa819.slice/crio-99d20f7354e4825f12fb5a1a2b7942ae3624d1c1cebdc72df06ab37951b2d858 WatchSource:0}: Error finding container 99d20f7354e4825f12fb5a1a2b7942ae3624d1c1cebdc72df06ab37951b2d858: Status 404 returned error can't find the container with id 
99d20f7354e4825f12fb5a1a2b7942ae3624d1c1cebdc72df06ab37951b2d858 Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.093151 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5df7bc8868-6w74x"] Feb 23 18:51:02 crc kubenswrapper[4768]: W0223 18:51:02.112005 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93487b6e_adae_4467_bc6f_022380ad3028.slice/crio-f4f8dd1296f4e9d842818122b3d931993ef4fd826724c28a7c7d2eedc34be5fe WatchSource:0}: Error finding container f4f8dd1296f4e9d842818122b3d931993ef4fd826724c28a7c7d2eedc34be5fe: Status 404 returned error can't find the container with id f4f8dd1296f4e9d842818122b3d931993ef4fd826724c28a7c7d2eedc34be5fe Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.188715 4768 generic.go:334] "Generic (PLEG): container finished" podID="fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" containerID="b839577d9e5bc6f0ec710127aab00202d6c9fdb1eb9a2b77d3fcbee6d9a511fc" exitCode=0 Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.188936 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" event={"ID":"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f","Type":"ContainerDied","Data":"b839577d9e5bc6f0ec710127aab00202d6c9fdb1eb9a2b77d3fcbee6d9a511fc"} Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.189003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" event={"ID":"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f","Type":"ContainerStarted","Data":"c7e7efb4cad9b3ee55d97a991de2b2bab9a690bfedf381c267651bd63d334b16"} Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.201580 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerStarted","Data":"297d59f9dc08a748aefe64e11b3ea2cf89252a451d186be8002d9ee196e8101a"} Feb 23 18:51:02 crc 
kubenswrapper[4768]: I0223 18:51:02.203980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" event={"ID":"93487b6e-adae-4467-bc6f-022380ad3028","Type":"ContainerStarted","Data":"f4f8dd1296f4e9d842818122b3d931993ef4fd826724c28a7c7d2eedc34be5fe"} Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.208287 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerStarted","Data":"3bd0da21ce7173113245d0809e07a16565be004a3ac8064a834ce107a655a594"} Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.234325 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"] Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.254434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9495fd7c-5kc55" event={"ID":"df97f54a-8ff1-4de9-9a88-80561f4aa819","Type":"ContainerStarted","Data":"99d20f7354e4825f12fb5a1a2b7942ae3624d1c1cebdc72df06ab37951b2d858"} Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.375604 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:02 crc kubenswrapper[4768]: W0223 18:51:02.577702 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1db9dd83_857d_446f_ae79_6a0d0a4bda0a.slice/crio-e98aa52a73d3e374c9c62bbe3b4398bbd7e9bf2da80f7b7b52de3eb4dad493f2 WatchSource:0}: Error finding container e98aa52a73d3e374c9c62bbe3b4398bbd7e9bf2da80f7b7b52de3eb4dad493f2: Status 404 returned error can't find the container with id e98aa52a73d3e374c9c62bbe3b4398bbd7e9bf2da80f7b7b52de3eb4dad493f2 Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.825172 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 
18:51:02.914766 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.914810 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86445c674d-k7fnl" Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.915593 4768 scope.go:117] "RemoveContainer" containerID="8df7addf879faa1157ac93cad44dc2f5410a91d009a7a5a0ea5851cd81e98d8c" Feb 23 18:51:02 crc kubenswrapper[4768]: E0223 18:51:02.915813 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"neutron-httpd\" with CrashLoopBackOff: \"back-off 20s restarting failed container=neutron-httpd pod=neutron-86445c674d-k7fnl_openstack(e9d24a80-bd92-4752-8786-147975b15301)\"" pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301" Feb 23 18:51:02 crc kubenswrapper[4768]: I0223 18:51:02.921389 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-api" probeResult="failure" output="Get \"http://10.217.0.155:9696/\": dial tcp 10.217.0.155:9696: connect: connection refused" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.035146 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.149682 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.150948 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.151123 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.151204 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.151325 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.151406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rx2\" 
(UniqueName: \"kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2\") pod \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\" (UID: \"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f\") " Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.182795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2" (OuterVolumeSpecName: "kube-api-access-72rx2") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). InnerVolumeSpecName "kube-api-access-72rx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.202701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.231344 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.264387 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.264425 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.264436 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rx2\" (UniqueName: \"kubernetes.io/projected/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-kube-api-access-72rx2\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.335991 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config" (OuterVolumeSpecName: "config") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.356539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" event={"ID":"6e097cb1-8802-4e80-b5c9-6469c7387e0b","Type":"ContainerStarted","Data":"79511cfaff42a02897b8522f6b2bede368de25f2c70971ddaf91cb343443c6c3"} Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.369000 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.379507 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerStarted","Data":"e98aa52a73d3e374c9c62bbe3b4398bbd7e9bf2da80f7b7b52de3eb4dad493f2"} Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.394561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" event={"ID":"fb3bace6-17e3-4fa5-9bff-2243a1c6b11f","Type":"ContainerDied","Data":"c7e7efb4cad9b3ee55d97a991de2b2bab9a690bfedf381c267651bd63d334b16"} Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.394636 4768 scope.go:117] "RemoveContainer" containerID="b839577d9e5bc6f0ec710127aab00202d6c9fdb1eb9a2b77d3fcbee6d9a511fc" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.394773 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b895b5785-bf6w5" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.478460 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.573426 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.760994 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" (UID: "fb3bace6-17e3-4fa5-9bff-2243a1c6b11f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:03 crc kubenswrapper[4768]: I0223 18:51:03.777850 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.095935 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.112563 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b895b5785-bf6w5"] Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.417135 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerStarted","Data":"a9b0965d2ddf697690d6f8212c92514cba2d4aa4404268d1e5361473cfe5275c"} Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.449101 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerID="0ad3662694b2461246649c15915809702676b5b698bf913aed43a626ce92365f" exitCode=0 Feb 23 18:51:04 crc 
kubenswrapper[4768]: I0223 18:51:04.450140 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" event={"ID":"6e097cb1-8802-4e80-b5c9-6469c7387e0b","Type":"ContainerDied","Data":"0ad3662694b2461246649c15915809702676b5b698bf913aed43a626ce92365f"} Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.458009 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerStarted","Data":"a0c40cd0f005020cade0d6c2228e86dfd735bf4d260f3b7bc1b9951bb5df6015"} Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.461688 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerStarted","Data":"f117b55c9ec30d729db06483f0c2e3f483a123d26d599395e01893583268ac59"} Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.665581 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.723432 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:51:04 crc kubenswrapper[4768]: I0223 18:51:04.901555 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.333865 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" path="/var/lib/kubelet/pods/fb3bace6-17e3-4fa5-9bff-2243a1c6b11f/volumes" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.477281 4768 generic.go:334] "Generic (PLEG): container finished" podID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerID="992edccbbd4dbced78c9aa11bebdb96c2b21132c6ee9a7b8bdb85168a1de4b46" exitCode=137 Feb 23 18:51:05 crc kubenswrapper[4768]: 
I0223 18:51:05.477337 4768 generic.go:334] "Generic (PLEG): container finished" podID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerID="326211414bab06de6e3e320987bf4657737969405d6bbe387618b2d5d5b871a3" exitCode=137 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.477367 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerDied","Data":"992edccbbd4dbced78c9aa11bebdb96c2b21132c6ee9a7b8bdb85168a1de4b46"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.477421 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerDied","Data":"326211414bab06de6e3e320987bf4657737969405d6bbe387618b2d5d5b871a3"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.484240 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerStarted","Data":"7c96206d9b25176eff8047bab2de7452e702c0f1ff9b66bfd26c118c0357ad7e"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.484517 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.484585 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.486829 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerStarted","Data":"20e9b5be445568f0da14b77d6bdbf202d7fe84c03e39de9c6b5e9a3cdfe1737d"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.487072 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api-log" containerID="cri-o://a0c40cd0f005020cade0d6c2228e86dfd735bf4d260f3b7bc1b9951bb5df6015" gracePeriod=30 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.487199 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.487268 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api" containerID="cri-o://20e9b5be445568f0da14b77d6bdbf202d7fe84c03e39de9c6b5e9a3cdfe1737d" gracePeriod=30 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.504857 4768 generic.go:334] "Generic (PLEG): container finished" podID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerID="4f9201dd6adf2ed18bc2268e671843218e9442058f4722cbdeef4c484ce86cf3" exitCode=0 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.505434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerDied","Data":"4f9201dd6adf2ed18bc2268e671843218e9442058f4722cbdeef4c484ce86cf3"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.516077 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5fbfdc6854-l4dxh" podStartSLOduration=4.516051304 podStartE2EDuration="4.516051304s" podCreationTimestamp="2026-02-23 18:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:05.505848115 +0000 UTC m=+1060.896333915" watchObservedRunningTime="2026-02-23 18:51:05.516051304 +0000 UTC m=+1060.906537104" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.521066 4768 generic.go:334] "Generic (PLEG): container finished" podID="7f393bd1-497e-4426-be4b-06f4c65f03f5" 
containerID="bfbd1b0852eb637126d18d1b2134229fd82aed1aa4505c42863b581f9d46b36b" exitCode=137 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.521099 4768 generic.go:334] "Generic (PLEG): container finished" podID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerID="cba4c852570c7fb6a1f0f05588013260695c7b051a725fad0743d6f4e1f6dab8" exitCode=137 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.521494 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerDied","Data":"bfbd1b0852eb637126d18d1b2134229fd82aed1aa4505c42863b581f9d46b36b"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.521561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerDied","Data":"cba4c852570c7fb6a1f0f05588013260695c7b051a725fad0743d6f4e1f6dab8"} Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.542318 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.542300055 podStartE2EDuration="5.542300055s" podCreationTimestamp="2026-02-23 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:05.532761633 +0000 UTC m=+1060.923247463" watchObservedRunningTime="2026-02-23 18:51:05.542300055 +0000 UTC m=+1060.932785855" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.551929 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.662720 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86445c674d-k7fnl"] Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.663111 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/neutron-86445c674d-k7fnl" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-api" containerID="cri-o://4fdcd00c6f8050d41022065c8ac3d5e39db2b0c4c92ee63384055d43d993f166" gracePeriod=30 Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.980552 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-546cfc7689-gsp5x"] Feb 23 18:51:05 crc kubenswrapper[4768]: E0223 18:51:05.981977 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" containerName="init" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.982047 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" containerName="init" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.983805 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3bace6-17e3-4fa5-9bff-2243a1c6b11f" containerName="init" Feb 23 18:51:05 crc kubenswrapper[4768]: I0223 18:51:05.986228 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:05.994554 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-546cfc7689-gsp5x"] Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-public-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155823 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-internal-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155857 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-httpd-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155899 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-ovndb-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155956 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-combined-ca-bundle\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.155984 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zfv\" (UniqueName: \"kubernetes.io/projected/e861983f-c70e-47f3-936d-202ae74a1144-kube-api-access-k2zfv\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-internal-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261625 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-httpd-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261665 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261693 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-ovndb-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261717 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-combined-ca-bundle\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261741 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2zfv\" (UniqueName: \"kubernetes.io/projected/e861983f-c70e-47f3-936d-202ae74a1144-kube-api-access-k2zfv\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.261847 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-public-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.268843 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364148 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364391 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364433 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364549 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj6rd\" (UniqueName: \"kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364609 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.364922 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml\") pod \"40891100-89e6-4bd1-9ea0-8707548ffee8\" (UID: \"40891100-89e6-4bd1-9ea0-8707548ffee8\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.367709 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-public-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.375160 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.375403 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-internal-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.376625 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.387489 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd" (OuterVolumeSpecName: "kube-api-access-hj6rd") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "kube-api-access-hj6rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.390016 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-combined-ca-bundle\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.390653 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-ovndb-tls-certs\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.419200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2zfv\" (UniqueName: \"kubernetes.io/projected/e861983f-c70e-47f3-936d-202ae74a1144-kube-api-access-k2zfv\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.419531 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.419811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e861983f-c70e-47f3-936d-202ae74a1144-httpd-config\") pod \"neutron-546cfc7689-gsp5x\" (UID: \"e861983f-c70e-47f3-936d-202ae74a1144\") " pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.468623 4768 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.468679 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40891100-89e6-4bd1-9ea0-8707548ffee8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.468689 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj6rd\" (UniqueName: \"kubernetes.io/projected/40891100-89e6-4bd1-9ea0-8707548ffee8-kube-api-access-hj6rd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.473350 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts" (OuterVolumeSpecName: "scripts") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.503824 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.558662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40891100-89e6-4bd1-9ea0-8707548ffee8","Type":"ContainerDied","Data":"47ca1535d18dd1035ad658330871c79c9974ed20a1312713dd603e1175978f15"} Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.558723 4768 scope.go:117] "RemoveContainer" containerID="f21e5e9db4adf7f22536ac20d3fbeca07bba909acfbb65f17802d715aedf6e9d" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.559121 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.570580 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.577901 4768 generic.go:334] "Generic (PLEG): container finished" podID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerID="c569cc9ba619df1b2cada5105fae786aba2b94fd34ece8f1c107172ce3fc5e44" exitCode=137 Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.577942 4768 generic.go:334] "Generic (PLEG): container finished" podID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerID="f9e3461a2c97be4605ebd45790a637f855c5964ae39517275b37c79e5e416163" exitCode=137 Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.577937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerDied","Data":"c569cc9ba619df1b2cada5105fae786aba2b94fd34ece8f1c107172ce3fc5e44"} Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.578015 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerDied","Data":"f9e3461a2c97be4605ebd45790a637f855c5964ae39517275b37c79e5e416163"} Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.580978 4768 generic.go:334] "Generic (PLEG): container finished" podID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerID="20e9b5be445568f0da14b77d6bdbf202d7fe84c03e39de9c6b5e9a3cdfe1737d" exitCode=0 Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.581004 4768 generic.go:334] "Generic (PLEG): container finished" podID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerID="a0c40cd0f005020cade0d6c2228e86dfd735bf4d260f3b7bc1b9951bb5df6015" exitCode=143 Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 
18:51:06.582849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerDied","Data":"20e9b5be445568f0da14b77d6bdbf202d7fe84c03e39de9c6b5e9a3cdfe1737d"} Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.582886 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerDied","Data":"a0c40cd0f005020cade0d6c2228e86dfd735bf4d260f3b7bc1b9951bb5df6015"} Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.583563 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kv44j" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" containerID="cri-o://7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d" gracePeriod=2 Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.650431 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.651021 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.666962 4768 scope.go:117] "RemoveContainer" containerID="a1d60d686b6efc7feaff457befce0ee53193aaa2baddae35f0c7e0e5de401a19" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.670116 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.678765 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.703001 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.750669 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.795993 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key\") pod \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796074 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwh92\" (UniqueName: \"kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796117 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: 
\"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796145 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs\") pod \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796167 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796193 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796275 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhsw\" (UniqueName: \"kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw\") pod \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796355 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts\") pod \"7f393bd1-497e-4426-be4b-06f4c65f03f5\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796403 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs\") pod \"7f393bd1-497e-4426-be4b-06f4c65f03f5\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796433 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts\") pod \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796462 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data\") pod \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\" (UID: \"6b4f1e75-6a30-4789-9b7f-85e92aed1581\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796479 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgxmm\" (UniqueName: \"kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm\") pod \"7f393bd1-497e-4426-be4b-06f4c65f03f5\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796515 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.796548 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle\") pod \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\" (UID: \"d8a712c2-737f-49cd-802c-8baeb2d5a0d1\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.797408 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.798939 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.813201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs" (OuterVolumeSpecName: "logs") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.814176 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs" (OuterVolumeSpecName: "logs") pod "7f393bd1-497e-4426-be4b-06f4c65f03f5" (UID: "7f393bd1-497e-4426-be4b-06f4c65f03f5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.820447 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs" (OuterVolumeSpecName: "logs") pod "6b4f1e75-6a30-4789-9b7f-85e92aed1581" (UID: "6b4f1e75-6a30-4789-9b7f-85e92aed1581"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.829665 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts" (OuterVolumeSpecName: "scripts") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.838835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "6b4f1e75-6a30-4789-9b7f-85e92aed1581" (UID: "6b4f1e75-6a30-4789-9b7f-85e92aed1581"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.839736 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw" (OuterVolumeSpecName: "kube-api-access-jhhsw") pod "6b4f1e75-6a30-4789-9b7f-85e92aed1581" (UID: "6b4f1e75-6a30-4789-9b7f-85e92aed1581"). InnerVolumeSpecName "kube-api-access-jhhsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.853685 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.887402 4768 scope.go:117] "RemoveContainer" containerID="4f9201dd6adf2ed18bc2268e671843218e9442058f4722cbdeef4c484ce86cf3" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.926688 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92" (OuterVolumeSpecName: "kube-api-access-dwh92") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "kube-api-access-dwh92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.928058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm" (OuterVolumeSpecName: "kube-api-access-sgxmm") pod "7f393bd1-497e-4426-be4b-06f4c65f03f5" (UID: "7f393bd1-497e-4426-be4b-06f4c65f03f5"). InnerVolumeSpecName "kube-api-access-sgxmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.928386 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data\") pod \"7f393bd1-497e-4426-be4b-06f4c65f03f5\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.928445 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key\") pod \"7f393bd1-497e-4426-be4b-06f4c65f03f5\" (UID: \"7f393bd1-497e-4426-be4b-06f4c65f03f5\") " Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934537 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f393bd1-497e-4426-be4b-06f4c65f03f5-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934575 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgxmm\" (UniqueName: \"kubernetes.io/projected/7f393bd1-497e-4426-be4b-06f4c65f03f5-kube-api-access-sgxmm\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934594 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6b4f1e75-6a30-4789-9b7f-85e92aed1581-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934607 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwh92\" (UniqueName: \"kubernetes.io/projected/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-kube-api-access-dwh92\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934619 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934630 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b4f1e75-6a30-4789-9b7f-85e92aed1581-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934641 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934653 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934664 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.934673 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhhsw\" (UniqueName: \"kubernetes.io/projected/6b4f1e75-6a30-4789-9b7f-85e92aed1581-kube-api-access-jhhsw\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.938439 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7f393bd1-497e-4426-be4b-06f4c65f03f5" (UID: "7f393bd1-497e-4426-be4b-06f4c65f03f5"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.941566 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bc688ffdb-gftft"] Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942018 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942084 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942151 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="proxy-httpd" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942205 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="proxy-httpd" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942277 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="sg-core" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942336 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="sg-core" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942398 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942454 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942510 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api" Feb 23 18:51:06 crc kubenswrapper[4768]: 
I0223 18:51:06.942561 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942623 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942670 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942730 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942783 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api-log" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942833 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="ceilometer-notification-agent" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942879 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="ceilometer-notification-agent" Feb 23 18:51:06 crc kubenswrapper[4768]: E0223 18:51:06.942931 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.942979 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.943206 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="proxy-httpd" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.943290 
4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.943353 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="sg-core" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.957683 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.957805 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.957870 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.957927 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" containerName="horizon-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.957989 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" containerName="ceilometer-notification-agent" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.958042 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" containerName="cinder-api-log" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.959445 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.966420 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bc688ffdb-gftft"] Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.979761 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts" (OuterVolumeSpecName: "scripts") pod "6b4f1e75-6a30-4789-9b7f-85e92aed1581" (UID: "6b4f1e75-6a30-4789-9b7f-85e92aed1581"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.980957 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.981422 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 23 18:51:06 crc kubenswrapper[4768]: I0223 18:51:06.987849 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data" (OuterVolumeSpecName: "config-data") pod "7f393bd1-497e-4426-be4b-06f4c65f03f5" (UID: "7f393bd1-497e-4426-be4b-06f4c65f03f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.003111 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data" (OuterVolumeSpecName: "config-data") pod "6b4f1e75-6a30-4789-9b7f-85e92aed1581" (UID: "6b4f1e75-6a30-4789-9b7f-85e92aed1581"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.012578 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.023322 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data" (OuterVolumeSpecName: "config-data") pod "40891100-89e6-4bd1-9ea0-8707548ffee8" (UID: "40891100-89e6-4bd1-9ea0-8707548ffee8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039102 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039289 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-combined-ca-bundle\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039333 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-internal-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-public-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039401 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data-custom\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039481 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ckjp\" (UniqueName: \"kubernetes.io/projected/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-kube-api-access-6ckjp\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039515 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-logs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039585 4768 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039600 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039610 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7f393bd1-497e-4426-be4b-06f4c65f03f5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039621 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40891100-89e6-4bd1-9ea0-8707548ffee8-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039631 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.039645 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b4f1e75-6a30-4789-9b7f-85e92aed1581-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.052379 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data" (OuterVolumeSpecName: "config-data") pod "d8a712c2-737f-49cd-802c-8baeb2d5a0d1" (UID: "d8a712c2-737f-49cd-802c-8baeb2d5a0d1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.056694 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts" (OuterVolumeSpecName: "scripts") pod "7f393bd1-497e-4426-be4b-06f4c65f03f5" (UID: "7f393bd1-497e-4426-be4b-06f4c65f03f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.141785 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142072 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-combined-ca-bundle\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142193 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-internal-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142320 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-public-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " 
pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data-custom\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ckjp\" (UniqueName: \"kubernetes.io/projected/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-kube-api-access-6ckjp\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-logs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.142728 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7f393bd1-497e-4426-be4b-06f4c65f03f5-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.143581 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8a712c2-737f-49cd-802c-8baeb2d5a0d1-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.143984 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-logs\") pod \"barbican-api-7bc688ffdb-gftft\" 
(UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.147964 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-combined-ca-bundle\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.152838 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-internal-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.158494 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-public-tls-certs\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.158883 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.161500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-config-data-custom\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " 
pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.176095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ckjp\" (UniqueName: \"kubernetes.io/projected/2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44-kube-api-access-6ckjp\") pod \"barbican-api-7bc688ffdb-gftft\" (UID: \"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44\") " pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.299554 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.308678 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc449878f-7drht" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.369091 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.434002 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.449583 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities\") pod \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.449703 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk9c4\" (UniqueName: \"kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4\") pod \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.449774 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts\") pod \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.449802 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content\") pod \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\" (UID: \"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.449943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5tfm\" (UniqueName: \"kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm\") pod \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.450008 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data\") pod \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.450126 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs\") pod \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.450206 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key\") pod \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\" (UID: \"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a\") " Feb 23 18:51:07 crc 
kubenswrapper[4768]: I0223 18:51:07.450962 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.452586 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities" (OuterVolumeSpecName: "utilities") pod "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" (UID: "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.454698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs" (OuterVolumeSpecName: "logs") pod "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" (UID: "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.461602 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4" (OuterVolumeSpecName: "kube-api-access-wk9c4") pod "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" (UID: "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd"). InnerVolumeSpecName "kube-api-access-wk9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463324 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463512 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" (UID: "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: E0223 18:51:07.463837 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon-log" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463852 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon-log" Feb 23 18:51:07 crc kubenswrapper[4768]: E0223 18:51:07.463864 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463870 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon" Feb 23 18:51:07 crc kubenswrapper[4768]: E0223 18:51:07.463892 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="extract-utilities" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463900 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="extract-utilities" Feb 23 18:51:07 crc kubenswrapper[4768]: E0223 18:51:07.463910 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463920 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" Feb 23 18:51:07 crc kubenswrapper[4768]: E0223 18:51:07.463929 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="extract-content" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.463935 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" 
containerName="extract-content" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.464127 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerName="registry-server" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.464151 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.464171 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" containerName="horizon-log" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.464479 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm" (OuterVolumeSpecName: "kube-api-access-k5tfm") pod "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" (UID: "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a"). InnerVolumeSpecName "kube-api-access-k5tfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.466108 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.472039 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.472808 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.472839 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554343 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554451 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5h4j\" (UniqueName: \"kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554538 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd\") pod \"ceilometer-0\" (UID: 
\"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554625 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554767 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554788 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554859 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5tfm\" (UniqueName: \"kubernetes.io/projected/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-kube-api-access-k5tfm\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554871 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554881 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 
crc kubenswrapper[4768]: I0223 18:51:07.554891 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.554905 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk9c4\" (UniqueName: \"kubernetes.io/projected/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-kube-api-access-wk9c4\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.555955 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts" (OuterVolumeSpecName: "scripts") pod "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" (UID: "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.556570 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data" (OuterVolumeSpecName: "config-data") pod "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" (UID: "5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.580759 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-546cfc7689-gsp5x"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.634238 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-546cfc7689-gsp5x" event={"ID":"e861983f-c70e-47f3-936d-202ae74a1144","Type":"ContainerStarted","Data":"3825c31ef4cab42fab5838826f1678bd31cd15c7a6c3779c2a3e59fac4565b39"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.635042 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" (UID: "ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.648892 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerStarted","Data":"15450062cce2e3be46045824db3438576669a2f97ac60f2d5cd0667df24f7868"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658427 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658477 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc 
kubenswrapper[4768]: I0223 18:51:07.658502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658541 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5h4j\" (UniqueName: \"kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658703 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9495fd7c-5kc55" 
event={"ID":"df97f54a-8ff1-4de9-9a88-80561f4aa819","Type":"ContainerStarted","Data":"5c1285e7218ae73ce1d4f625eea40a55a84ac2de9368f22c4b84ab1cc0abbf61"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658751 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-9495fd7c-5kc55" event={"ID":"df97f54a-8ff1-4de9-9a88-80561f4aa819","Type":"ContainerStarted","Data":"c1e244d907dac5edc4308e71043542dbe669f28baa0412cd75b8c2c89b612c6c"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658718 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658792 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.658810 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.662489 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.663075 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 
18:51:07.665052 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.665095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.668173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.672433 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.680071 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.771590358 podStartE2EDuration="7.680052059s" podCreationTimestamp="2026-02-23 18:51:00 +0000 UTC" firstStartedPulling="2026-02-23 18:51:01.81425863 +0000 UTC m=+1057.204744430" lastFinishedPulling="2026-02-23 18:51:02.722720331 +0000 UTC m=+1058.113206131" observedRunningTime="2026-02-23 18:51:07.672564544 +0000 UTC m=+1063.063050334" watchObservedRunningTime="2026-02-23 18:51:07.680052059 +0000 UTC m=+1063.070537849" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.686587 4768 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-845b48bb89-v6rjx" event={"ID":"6b4f1e75-6a30-4789-9b7f-85e92aed1581","Type":"ContainerDied","Data":"f288ad0c307b12b39fb34e061b6ce6641326600c98bcc27745b27db264eacce4"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.686641 4768 scope.go:117] "RemoveContainer" containerID="992edccbbd4dbced78c9aa11bebdb96c2b21132c6ee9a7b8bdb85168a1de4b46" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.686795 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-845b48bb89-v6rjx" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.691331 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5h4j\" (UniqueName: \"kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j\") pod \"ceilometer-0\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.701962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" event={"ID":"6e097cb1-8802-4e80-b5c9-6469c7387e0b","Type":"ContainerStarted","Data":"9b702c8a2f49f152355850a0d01baa5a20f0166d677b2173f439d35a0566a116"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.703456 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.711713 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-9495fd7c-5kc55" podStartSLOduration=3.687626227 podStartE2EDuration="7.711687346s" podCreationTimestamp="2026-02-23 18:51:00 +0000 UTC" firstStartedPulling="2026-02-23 18:51:01.871584941 +0000 UTC m=+1057.262070741" lastFinishedPulling="2026-02-23 18:51:05.89564606 +0000 UTC m=+1061.286131860" observedRunningTime="2026-02-23 18:51:07.695829901 +0000 UTC 
m=+1063.086315711" watchObservedRunningTime="2026-02-23 18:51:07.711687346 +0000 UTC m=+1063.102173146" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.727142 4768 generic.go:334] "Generic (PLEG): container finished" podID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" containerID="7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d" exitCode=0 Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.727272 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerDied","Data":"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.727303 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kv44j" event={"ID":"ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd","Type":"ContainerDied","Data":"3e688579097fba220fc0de064efffb49702d4886478ee2330bd09b50e0a01a86"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.727371 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kv44j" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.748790 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.754533 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67699f99c7-5rzsw" event={"ID":"7f393bd1-497e-4426-be4b-06f4c65f03f5","Type":"ContainerDied","Data":"7bcd805c84498830f593aeb64da766f66b0a41461d135877dc274dc480c91a1e"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.754615 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67699f99c7-5rzsw" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.758291 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-845b48bb89-v6rjx"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.759125 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" podStartSLOduration=7.759097416 podStartE2EDuration="7.759097416s" podCreationTimestamp="2026-02-23 18:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:07.746903682 +0000 UTC m=+1063.137389482" watchObservedRunningTime="2026-02-23 18:51:07.759097416 +0000 UTC m=+1063.149583216" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.772622 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-bc449878f-7drht" event={"ID":"5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a","Type":"ContainerDied","Data":"7af1c4a52ee31edea59b62628834908d3380794171146ff7c61474a35e60fecd"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.772656 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-bc449878f-7drht" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.775876 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d8a712c2-737f-49cd-802c-8baeb2d5a0d1","Type":"ContainerDied","Data":"297d59f9dc08a748aefe64e11b3ea2cf89252a451d186be8002d9ee196e8101a"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.775987 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.779021 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.787512 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" event={"ID":"93487b6e-adae-4467-bc6f-022380ad3028","Type":"ContainerStarted","Data":"6746e707b661d2c151acc16938b41a8497e3b07895dcab5c951680a153bc24a2"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.787562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" event={"ID":"93487b6e-adae-4467-bc6f-022380ad3028","Type":"ContainerStarted","Data":"a066437ba182c4c169be2fea13f0a23c9ff5a1c4d94a842bf7acf90ccdc80e3a"} Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.788161 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kv44j"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.792778 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.810148 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.827352 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67699f99c7-5rzsw"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.869859 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.945082 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.954295 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.955746 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.958183 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.958388 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.959736 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5df7bc8868-6w74x" podStartSLOduration=4.201346419 podStartE2EDuration="7.959723195s" podCreationTimestamp="2026-02-23 18:51:00 +0000 UTC" firstStartedPulling="2026-02-23 18:51:02.11706267 +0000 UTC m=+1057.507548470" lastFinishedPulling="2026-02-23 18:51:05.875439446 +0000 UTC m=+1061.265925246" observedRunningTime="2026-02-23 18:51:07.865738029 +0000 UTC m=+1063.256223829" watchObservedRunningTime="2026-02-23 18:51:07.959723195 +0000 UTC m=+1063.350208995" Feb 23 18:51:07 crc kubenswrapper[4768]: 
I0223 18:51:07.962556 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.973631 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 23 18:51:07 crc kubenswrapper[4768]: I0223 18:51:07.974570 4768 scope.go:117] "RemoveContainer" containerID="326211414bab06de6e3e320987bf4657737969405d6bbe387618b2d5d5b871a3" Feb 23 18:51:07 crc kubenswrapper[4768]: W0223 18:51:07.979532 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d4e76c7_6c4f_4e31_9a08_25bb4c5e1c44.slice/crio-ac3783155779c6fd0c3cb858bb686851ef046f1f2218d243dead4117eb33f599 WatchSource:0}: Error finding container ac3783155779c6fd0c3cb858bb686851ef046f1f2218d243dead4117eb33f599: Status 404 returned error can't find the container with id ac3783155779c6fd0c3cb858bb686851ef046f1f2218d243dead4117eb33f599 Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.034308 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-bc449878f-7drht"] Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.061336 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-bc449878f-7drht"] Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.065419 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bc688ffdb-gftft"] Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.071999 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e166f4-a4aa-4185-b21d-36037d575748-logs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-scripts\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072103 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072149 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data-custom\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072208 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41e166f4-a4aa-4185-b21d-36037d575748-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072235 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072270 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcf98\" (UniqueName: \"kubernetes.io/projected/41e166f4-a4aa-4185-b21d-36037d575748-kube-api-access-hcf98\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.072369 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.156621 4768 scope.go:117] "RemoveContainer" containerID="7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.173878 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data-custom\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174333 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41e166f4-a4aa-4185-b21d-36037d575748-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174354 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174375 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174399 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcf98\" (UniqueName: \"kubernetes.io/projected/41e166f4-a4aa-4185-b21d-36037d575748-kube-api-access-hcf98\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174431 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174493 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e166f4-a4aa-4185-b21d-36037d575748-logs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174516 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-scripts\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 
18:51:08.174558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.174791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41e166f4-a4aa-4185-b21d-36037d575748-etc-machine-id\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.175997 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/41e166f4-a4aa-4185-b21d-36037d575748-logs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.188900 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.193549 4768 scope.go:117] "RemoveContainer" containerID="63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.199237 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.200689 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-public-tls-certs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.201833 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.210065 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-scripts\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.210746 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/41e166f4-a4aa-4185-b21d-36037d575748-config-data-custom\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.218969 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcf98\" (UniqueName: \"kubernetes.io/projected/41e166f4-a4aa-4185-b21d-36037d575748-kube-api-access-hcf98\") pod \"cinder-api-0\" (UID: \"41e166f4-a4aa-4185-b21d-36037d575748\") " pod="openstack/cinder-api-0" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.280519 4768 scope.go:117] "RemoveContainer" containerID="2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.309932 4768 scope.go:117] "RemoveContainer" containerID="7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d" Feb 23 18:51:08 crc 
kubenswrapper[4768]: E0223 18:51:08.310522 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d\": container with ID starting with 7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d not found: ID does not exist" containerID="7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.310553 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d"} err="failed to get container status \"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d\": rpc error: code = NotFound desc = could not find container \"7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d\": container with ID starting with 7e27f793c5fa041075e84feb23c85a4fc819ef5d44075425346595d23c8eed1d not found: ID does not exist" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.310586 4768 scope.go:117] "RemoveContainer" containerID="63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e" Feb 23 18:51:08 crc kubenswrapper[4768]: E0223 18:51:08.313398 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e\": container with ID starting with 63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e not found: ID does not exist" containerID="63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.313468 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e"} err="failed to get container status 
\"63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e\": rpc error: code = NotFound desc = could not find container \"63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e\": container with ID starting with 63d40138d2eb716a1d2e40353196b5d9f35886644d690555e5057655ab22c04e not found: ID does not exist" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.315192 4768 scope.go:117] "RemoveContainer" containerID="2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70" Feb 23 18:51:08 crc kubenswrapper[4768]: E0223 18:51:08.318003 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70\": container with ID starting with 2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70 not found: ID does not exist" containerID="2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.318028 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70"} err="failed to get container status \"2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70\": rpc error: code = NotFound desc = could not find container \"2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70\": container with ID starting with 2057552940e16024c54a400a460ad657dadd3fae60903ad57b98c61599485e70 not found: ID does not exist" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.318046 4768 scope.go:117] "RemoveContainer" containerID="bfbd1b0852eb637126d18d1b2134229fd82aed1aa4505c42863b581f9d46b36b" Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.464910 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.503621 4768 scope.go:117] "RemoveContainer" containerID="cba4c852570c7fb6a1f0f05588013260695c7b051a725fad0743d6f4e1f6dab8"
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.506078 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.585744 4768 scope.go:117] "RemoveContainer" containerID="c569cc9ba619df1b2cada5105fae786aba2b94fd34ece8f1c107172ce3fc5e44"
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.814125 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-546cfc7689-gsp5x" event={"ID":"e861983f-c70e-47f3-936d-202ae74a1144","Type":"ContainerStarted","Data":"136a307ad4d5bd1267ec6482b88213ce933d55acfa611e4d3fe87fb1da75965e"}
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.834631 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerStarted","Data":"9ffe62cd69e989317026b2326e93d566bbf0a9ce7c41c789dbd3745a6867ae30"}
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.842966 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc688ffdb-gftft" event={"ID":"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44","Type":"ContainerStarted","Data":"e8ada9f0af549b8da9dc3242e9fc3d4bc4c33757564a9e77467072541b3f5300"}
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.843013 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc688ffdb-gftft" event={"ID":"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44","Type":"ContainerStarted","Data":"ac3783155779c6fd0c3cb858bb686851ef046f1f2218d243dead4117eb33f599"}
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.884446 4768 scope.go:117] "RemoveContainer" containerID="f9e3461a2c97be4605ebd45790a637f855c5964ae39517275b37c79e5e416163"
Feb 23 18:51:08 crc kubenswrapper[4768]: I0223 18:51:08.971568 4768 scope.go:117] "RemoveContainer" containerID="20e9b5be445568f0da14b77d6bdbf202d7fe84c03e39de9c6b5e9a3cdfe1737d"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.003900 4768 scope.go:117] "RemoveContainer" containerID="a0c40cd0f005020cade0d6c2228e86dfd735bf4d260f3b7bc1b9951bb5df6015"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.008833 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 23 18:51:09 crc kubenswrapper[4768]: W0223 18:51:09.043798 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41e166f4_a4aa_4185_b21d_36037d575748.slice/crio-fdd561d9dba7fa4d6fd38c476c2fc2dea809805fb53f883d72a8490e78fce5fc WatchSource:0}: Error finding container fdd561d9dba7fa4d6fd38c476c2fc2dea809805fb53f883d72a8490e78fce5fc: Status 404 returned error can't find the container with id fdd561d9dba7fa4d6fd38c476c2fc2dea809805fb53f883d72a8490e78fce5fc
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.319739 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40891100-89e6-4bd1-9ea0-8707548ffee8" path="/var/lib/kubelet/pods/40891100-89e6-4bd1-9ea0-8707548ffee8/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.321504 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a" path="/var/lib/kubelet/pods/5ce40341-e6fd-4f68-bbfa-cb67b0b3cd1a/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.323324 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b4f1e75-6a30-4789-9b7f-85e92aed1581" path="/var/lib/kubelet/pods/6b4f1e75-6a30-4789-9b7f-85e92aed1581/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.324431 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f393bd1-497e-4426-be4b-06f4c65f03f5" path="/var/lib/kubelet/pods/7f393bd1-497e-4426-be4b-06f4c65f03f5/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.325214 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a712c2-737f-49cd-802c-8baeb2d5a0d1" path="/var/lib/kubelet/pods/d8a712c2-737f-49cd-802c-8baeb2d5a0d1/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.327009 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd" path="/var/lib/kubelet/pods/ea5cf921-46c4-4fec-8dfa-72f5ad7d4acd/volumes"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.868904 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerStarted","Data":"0074e7e3e52af085dabc712b9f23cb2c5260943006e063e55ddb5d8268252469"}
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.872232 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bc688ffdb-gftft" event={"ID":"2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44","Type":"ContainerStarted","Data":"38af39bf32c2bda71f5a33b9bc9142a2efac0f5b432e0f7df553be9babd3dad2"}
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.874325 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bc688ffdb-gftft"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.874383 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bc688ffdb-gftft"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.877014 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41e166f4-a4aa-4185-b21d-36037d575748","Type":"ContainerStarted","Data":"e10cc0be83c673afcb2842ec2a2a391db56d3c1cb924beb534388cc8544fc1ef"}
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.877062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41e166f4-a4aa-4185-b21d-36037d575748","Type":"ContainerStarted","Data":"fdd561d9dba7fa4d6fd38c476c2fc2dea809805fb53f883d72a8490e78fce5fc"}
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.920237 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bc688ffdb-gftft" podStartSLOduration=3.920216461 podStartE2EDuration="3.920216461s" podCreationTimestamp="2026-02-23 18:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:09.903747379 +0000 UTC m=+1065.294233199" watchObservedRunningTime="2026-02-23 18:51:09.920216461 +0000 UTC m=+1065.310702271"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.920773 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-546cfc7689-gsp5x" event={"ID":"e861983f-c70e-47f3-936d-202ae74a1144","Type":"ContainerStarted","Data":"ac459805a020b63785921500a794436be4cc9061f5080108fe479745edd25c1c"}
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.922010 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-546cfc7689-gsp5x"
Feb 23 18:51:09 crc kubenswrapper[4768]: I0223 18:51:09.959331 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-546cfc7689-gsp5x" podStartSLOduration=4.959309202 podStartE2EDuration="4.959309202s" podCreationTimestamp="2026-02-23 18:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:09.956857235 +0000 UTC m=+1065.347343055" watchObservedRunningTime="2026-02-23 18:51:09.959309202 +0000 UTC m=+1065.349795002"
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.439220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.782820 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.940558 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerStarted","Data":"756e9e3a604fd8311db65771801ca231fa0f51612c1a6e667808ab1788bd6a08"}
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.941654 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerStarted","Data":"f2d9f2ecec84384072a8efa4854b01bbf8cdede0e2444531ec254f0c9d2bc2f4"}
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.945711 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"41e166f4-a4aa-4185-b21d-36037d575748","Type":"ContainerStarted","Data":"e15d2233026a23742cec6bd48f03a4ebc2fd23ad6216094963a9174c12605a49"}
Feb 23 18:51:10 crc kubenswrapper[4768]: I0223 18:51:10.985553 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.985523391 podStartE2EDuration="3.985523391s" podCreationTimestamp="2026-02-23 18:51:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:10.966586532 +0000 UTC m=+1066.357072332" watchObservedRunningTime="2026-02-23 18:51:10.985523391 +0000 UTC m=+1066.376009201"
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.028380 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.415459 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q"
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.476354 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"]
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.482077 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="dnsmasq-dns" containerID="cri-o://4c522c96ef0f0450410afdc370df1bed13c1171cc78fc21bea2d3aac33fe2094" gracePeriod=10
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.965465 4768 generic.go:334] "Generic (PLEG): container finished" podID="2534372c-ef07-45f2-917b-912873de873d" containerID="4c522c96ef0f0450410afdc370df1bed13c1171cc78fc21bea2d3aac33fe2094" exitCode=0
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.965542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" event={"ID":"2534372c-ef07-45f2-917b-912873de873d","Type":"ContainerDied","Data":"4c522c96ef0f0450410afdc370df1bed13c1171cc78fc21bea2d3aac33fe2094"}
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.966520 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="cinder-scheduler" containerID="cri-o://f117b55c9ec30d729db06483f0c2e3f483a123d26d599395e01893583268ac59" gracePeriod=30
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.967323 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="probe" containerID="cri-o://15450062cce2e3be46045824db3438576669a2f97ac60f2d5cd0667df24f7868" gracePeriod=30
Feb 23 18:51:11 crc kubenswrapper[4768]: I0223 18:51:11.969533 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.254045 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj"
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.410863 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.410943 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.410982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.411060 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-589gh\" (UniqueName: \"kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.411109 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.411215 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0\") pod \"2534372c-ef07-45f2-917b-912873de873d\" (UID: \"2534372c-ef07-45f2-917b-912873de873d\") "
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.428589 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh" (OuterVolumeSpecName: "kube-api-access-589gh") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "kube-api-access-589gh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.524518 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-589gh\" (UniqueName: \"kubernetes.io/projected/2534372c-ef07-45f2-917b-912873de873d-kube-api-access-589gh\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.535806 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.552901 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config" (OuterVolumeSpecName: "config") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.563945 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.580820 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.589157 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2534372c-ef07-45f2-917b-912873de873d" (UID: "2534372c-ef07-45f2-917b-912873de873d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.631886 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.631941 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.631953 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.631964 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.631974 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2534372c-ef07-45f2-917b-912873de873d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.980923 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj" event={"ID":"2534372c-ef07-45f2-917b-912873de873d","Type":"ContainerDied","Data":"3effaf80267a1d3214f70419a3b7c84b8186557fea4beb0dcdb33a0c2c28d6de"}
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.981190 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-l2tjj"
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.981227 4768 scope.go:117] "RemoveContainer" containerID="4c522c96ef0f0450410afdc370df1bed13c1171cc78fc21bea2d3aac33fe2094"
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.999558 4768 generic.go:334] "Generic (PLEG): container finished" podID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerID="15450062cce2e3be46045824db3438576669a2f97ac60f2d5cd0667df24f7868" exitCode=0
Feb 23 18:51:12 crc kubenswrapper[4768]: I0223 18:51:12.999851 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerDied","Data":"15450062cce2e3be46045824db3438576669a2f97ac60f2d5cd0667df24f7868"}
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.012200 4768 scope.go:117] "RemoveContainer" containerID="37f8f9cd0693fcdf36364cb8f6d986e9e8ad77fd48d6881a99b6109e6cef4fde"
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.025308 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"]
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.041402 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-l2tjj"]
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.062421 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-58cc9986b4-t7tcs"
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.319522 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2534372c-ef07-45f2-917b-912873de873d" path="/var/lib/kubelet/pods/2534372c-ef07-45f2-917b-912873de873d/volumes"
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.573141 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 23 18:51:13 crc kubenswrapper[4768]: I0223 18:51:13.799513 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.012514 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerStarted","Data":"e912c246592de399cac7c5dab58f7249daf90eb63e35ae05d11db8bf2726b7d1"}
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.013591 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.062874 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9260107189999998 podStartE2EDuration="7.062853529s" podCreationTimestamp="2026-02-23 18:51:07 +0000 UTC" firstStartedPulling="2026-02-23 18:51:08.532756481 +0000 UTC m=+1063.923242281" lastFinishedPulling="2026-02-23 18:51:12.669599281 +0000 UTC m=+1068.060085091" observedRunningTime="2026-02-23 18:51:14.055721084 +0000 UTC m=+1069.446206884" watchObservedRunningTime="2026-02-23 18:51:14.062853529 +0000 UTC m=+1069.453339329"
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.212883 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbfdc6854-l4dxh"
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.402696 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5fbfdc6854-l4dxh"
Feb 23 18:51:14 crc kubenswrapper[4768]: I0223 18:51:14.679511 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-84699c9d66-ghjfn"
Feb 23 18:51:15 crc kubenswrapper[4768]: I0223 18:51:15.753166 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-58cc9986b4-t7tcs"
Feb 23 18:51:15 crc kubenswrapper[4768]: I0223 18:51:15.825057 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"]
Feb 23 18:51:15 crc kubenswrapper[4768]: I0223 18:51:15.825389 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon-log" containerID="cri-o://5beaf90673a241480f2721b2cb11d0bf9f251a26131590b7450193ab00ec0e69" gracePeriod=30
Feb 23 18:51:15 crc kubenswrapper[4768]: I0223 18:51:15.825962 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" containerID="cri-o://a8b896bc35a90342c52e7fd2aa30b84aefe074f3b241b438ecfa2e1f371e5920" gracePeriod=30
Feb 23 18:51:15 crc kubenswrapper[4768]: I0223 18:51:15.837069 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.047820 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/2.log"
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.053893 4768 generic.go:334] "Generic (PLEG): container finished" podID="e9d24a80-bd92-4752-8786-147975b15301" containerID="4fdcd00c6f8050d41022065c8ac3d5e39db2b0c4c92ee63384055d43d993f166" exitCode=0
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.055335 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerDied","Data":"4fdcd00c6f8050d41022065c8ac3d5e39db2b0c4c92ee63384055d43d993f166"}
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.265988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/2.log"
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.266981 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445c674d-k7fnl"
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.325387 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs\") pod \"e9d24a80-bd92-4752-8786-147975b15301\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") "
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.325613 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dwqc\" (UniqueName: \"kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc\") pod \"e9d24a80-bd92-4752-8786-147975b15301\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") "
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.325668 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle\") pod \"e9d24a80-bd92-4752-8786-147975b15301\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") "
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.325745 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config\") pod \"e9d24a80-bd92-4752-8786-147975b15301\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") "
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.325914 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config\") pod \"e9d24a80-bd92-4752-8786-147975b15301\" (UID: \"e9d24a80-bd92-4752-8786-147975b15301\") "
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.341733 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc" (OuterVolumeSpecName: "kube-api-access-2dwqc") pod "e9d24a80-bd92-4752-8786-147975b15301" (UID: "e9d24a80-bd92-4752-8786-147975b15301"). InnerVolumeSpecName "kube-api-access-2dwqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.346519 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "e9d24a80-bd92-4752-8786-147975b15301" (UID: "e9d24a80-bd92-4752-8786-147975b15301"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.429463 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dwqc\" (UniqueName: \"kubernetes.io/projected/e9d24a80-bd92-4752-8786-147975b15301-kube-api-access-2dwqc\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.429809 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.441093 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config" (OuterVolumeSpecName: "config") pod "e9d24a80-bd92-4752-8786-147975b15301" (UID: "e9d24a80-bd92-4752-8786-147975b15301"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.442393 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9d24a80-bd92-4752-8786-147975b15301" (UID: "e9d24a80-bd92-4752-8786-147975b15301"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.478275 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "e9d24a80-bd92-4752-8786-147975b15301" (UID: "e9d24a80-bd92-4752-8786-147975b15301"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.532584 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.532623 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:16 crc kubenswrapper[4768]: I0223 18:51:16.532638 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9d24a80-bd92-4752-8786-147975b15301-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.068953 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-86445c674d-k7fnl_e9d24a80-bd92-4752-8786-147975b15301/neutron-httpd/2.log"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.070314 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86445c674d-k7fnl"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.070341 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86445c674d-k7fnl" event={"ID":"e9d24a80-bd92-4752-8786-147975b15301","Type":"ContainerDied","Data":"a5402175a7e7229c709e0919d9fc24caef055d03c856f1174d6560ef1eb2e702"}
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.070417 4768 scope.go:117] "RemoveContainer" containerID="8df7addf879faa1157ac93cad44dc2f5410a91d009a7a5a0ea5851cd81e98d8c"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.097671 4768 generic.go:334] "Generic (PLEG): container finished" podID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerID="f117b55c9ec30d729db06483f0c2e3f483a123d26d599395e01893583268ac59" exitCode=0
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.097765 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerDied","Data":"f117b55c9ec30d729db06483f0c2e3f483a123d26d599395e01893583268ac59"}
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.110410 4768 scope.go:117] "RemoveContainer" containerID="4fdcd00c6f8050d41022065c8ac3d5e39db2b0c4c92ee63384055d43d993f166"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.155641 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86445c674d-k7fnl"]
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.165789 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86445c674d-k7fnl"]
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.321184 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d24a80-bd92-4752-8786-147975b15301" path="/var/lib/kubelet/pods/e9d24a80-bd92-4752-8786-147975b15301/volumes"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.529926 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554583 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554614 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554820 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554859 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554927 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.554982 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx2s9\" (UniqueName: \"kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9\") pod \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\" (UID: \"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807\") "
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.555660 4768 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.561787 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts" (OuterVolumeSpecName: "scripts") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.564194 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9" (OuterVolumeSpecName: "kube-api-access-xx2s9") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "kube-api-access-xx2s9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.572087 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.657980 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.658021 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.658031 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx2s9\" (UniqueName: \"kubernetes.io/projected/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-kube-api-access-xx2s9\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.709996 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.742469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data" (OuterVolumeSpecName: "config-data") pod "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" (UID: "ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.760241 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:17 crc kubenswrapper[4768]: I0223 18:51:17.760316 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.109518 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807","Type":"ContainerDied","Data":"3bd0da21ce7173113245d0809e07a16565be004a3ac8064a834ce107a655a594"}
Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.109604 4768 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.109694 4768 scope.go:117] "RemoveContainer" containerID="15450062cce2e3be46045824db3438576669a2f97ac60f2d5cd0667df24f7868" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.135937 4768 scope.go:117] "RemoveContainer" containerID="f117b55c9ec30d729db06483f0c2e3f483a123d26d599395e01893583268ac59" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.154967 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.216310 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256297 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256675 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256694 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256708 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256716 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256725 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="probe" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256731 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="probe" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256742 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="cinder-scheduler" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256748 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="cinder-scheduler" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256761 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="dnsmasq-dns" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256769 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="dnsmasq-dns" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256788 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="init" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256795 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="init" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.256806 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-api" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256812 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-api" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256973 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" containerName="cinder-scheduler" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.256991 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" 
containerName="probe" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257000 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257011 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257021 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-api" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257030 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2534372c-ef07-45f2-917b-912873de873d" containerName="dnsmasq-dns" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257041 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: E0223 18:51:18.257221 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257230 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d24a80-bd92-4752-8786-147975b15301" containerName="neutron-httpd" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.257991 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.262751 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.262850 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.375878 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.376365 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.376627 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.376724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl9dq\" (UniqueName: \"kubernetes.io/projected/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-kube-api-access-tl9dq\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: 
I0223 18:51:18.376924 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.377058 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479095 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl9dq\" (UniqueName: \"kubernetes.io/projected/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-kube-api-access-tl9dq\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479357 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.479446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.480498 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.494438 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-scripts\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.494470 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " 
pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.494585 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.494939 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-config-data\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.498445 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl9dq\" (UniqueName: \"kubernetes.io/projected/3ef90267-50a1-45c4-9c1e-95f2ce0bce4b-kube-api-access-tl9dq\") pod \"cinder-scheduler-0\" (UID: \"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b\") " pod="openstack/cinder-scheduler-0" Feb 23 18:51:18 crc kubenswrapper[4768]: I0223 18:51:18.585373 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.132711 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 23 18:51:19 crc kubenswrapper[4768]: W0223 18:51:19.143455 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef90267_50a1_45c4_9c1e_95f2ce0bce4b.slice/crio-2c5eec51cbd160278d59f315b06d9e3f44d0b65cd5a2cdfce03e1526db54d0b7 WatchSource:0}: Error finding container 2c5eec51cbd160278d59f315b06d9e3f44d0b65cd5a2cdfce03e1526db54d0b7: Status 404 returned error can't find the container with id 2c5eec51cbd160278d59f315b06d9e3f44d0b65cd5a2cdfce03e1526db54d0b7 Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.188688 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.325582 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807" path="/var/lib/kubelet/pods/ae0bd0ea-0c8e-4ac5-aea6-b39f8769f807/volumes" Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.646689 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bc688ffdb-gftft" Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.714392 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.714999 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5fbfdc6854-l4dxh" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api-log" containerID="cri-o://a9b0965d2ddf697690d6f8212c92514cba2d4aa4404268d1e5361473cfe5275c" gracePeriod=30 Feb 23 18:51:19 crc kubenswrapper[4768]: I0223 18:51:19.715481 4768 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/barbican-api-5fbfdc6854-l4dxh" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api" containerID="cri-o://7c96206d9b25176eff8047bab2de7452e702c0f1ff9b66bfd26c118c0357ad7e" gracePeriod=30 Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.143683 4768 generic.go:334] "Generic (PLEG): container finished" podID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerID="a9b0965d2ddf697690d6f8212c92514cba2d4aa4404268d1e5361473cfe5275c" exitCode=143 Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.143768 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerDied","Data":"a9b0965d2ddf697690d6f8212c92514cba2d4aa4404268d1e5361473cfe5275c"} Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.146692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b","Type":"ContainerStarted","Data":"28a244c1d316300f60011de427fa40a01cfbdf06ac5d3bb9b848bd9d0e0b49e4"} Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.146725 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b","Type":"ContainerStarted","Data":"2c5eec51cbd160278d59f315b06d9e3f44d0b65cd5a2cdfce03e1526db54d0b7"} Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.280179 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:35942->10.217.0.149:8443: read: connection reset by peer" Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.352986 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.353238 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:51:20 crc kubenswrapper[4768]: I0223 18:51:20.719000 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.107894 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.157445 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3ef90267-50a1-45c4-9c1e-95f2ce0bce4b","Type":"ContainerStarted","Data":"09e9d1cb383b4d2b7f3372344ad43ee5bedcccccb541323be3b55beda4d17b65"} Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.184395 4768 generic.go:334] "Generic (PLEG): container finished" podID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerID="a8b896bc35a90342c52e7fd2aa30b84aefe074f3b241b438ecfa2e1f371e5920" exitCode=0 Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.184520 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerDied","Data":"a8b896bc35a90342c52e7fd2aa30b84aefe074f3b241b438ecfa2e1f371e5920"} Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.191124 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.191110842 podStartE2EDuration="3.191110842s" podCreationTimestamp="2026-02-23 18:51:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:21.181803916 +0000 UTC m=+1076.572289716" watchObservedRunningTime="2026-02-23 18:51:21.191110842 +0000 UTC m=+1076.581596642" Feb 23 18:51:21 crc kubenswrapper[4768]: I0223 18:51:21.482065 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f7bc597d-jphlt" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.501005 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.503624 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.508618 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.508826 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9q55n" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.509005 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.512696 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.608354 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-574fcfd8cb-8sv54" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.647156 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-574fcfd8cb-8sv54" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.685516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.685809 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.685971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.686255 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2pl9\" (UniqueName: \"kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.722924 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.723439 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6cdff58f68-7n8ch" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-log" containerID="cri-o://020d0d98589f3508180b9e8f1cc77361ae54bab645d684cbdb0d76775d09bb3c" gracePeriod=30 Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.723960 4768 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/placement-6cdff58f68-7n8ch" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-api" containerID="cri-o://b3b281fb91b9a51cf32e69a11331e5cb0b62fa031b0026402ec1ee29425193c9" gracePeriod=30 Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.792405 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.792473 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.792504 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.792621 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2pl9\" (UniqueName: \"kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.795381 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config\") pod 
\"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.807978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.816921 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2pl9\" (UniqueName: \"kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.836134 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret\") pod \"openstackclient\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.940041 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.941131 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.965481 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.985565 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:22 crc kubenswrapper[4768]: I0223 18:51:22.987450 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:22.999392 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.065817 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5fbfdc6854-l4dxh" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:39016->10.217.0.167:9311: read: connection reset by peer" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.066262 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5fbfdc6854-l4dxh" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.167:9311/healthcheck\": read tcp 10.217.0.2:39030->10.217.0.167:9311: read: connection reset by peer" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.099063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.099136 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkzj\" (UniqueName: \"kubernetes.io/projected/7fa93987-e84a-4fa8-97ab-4df24aabb201-kube-api-access-vmkzj\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.099191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.099236 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: E0223 18:51:23.105851 4768 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 23 18:51:23 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d646c15c-f058-4ca7-ab35-3da4bdb0d60d_0(3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a" Netns:"/var/run/netns/f6a41e03-dc7c-491e-95f4-8fac8794d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a;K8S_POD_UID=d646c15c-f058-4ca7-ab35-3da4bdb0d60d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d646c15c-f058-4ca7-ab35-3da4bdb0d60d]: expected pod UID "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" but got "7fa93987-e84a-4fa8-97ab-4df24aabb201" from Kube API Feb 23 18:51:23 crc kubenswrapper[4768]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 18:51:23 crc kubenswrapper[4768]: > Feb 23 18:51:23 crc kubenswrapper[4768]: E0223 18:51:23.105921 4768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 23 18:51:23 crc kubenswrapper[4768]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_d646c15c-f058-4ca7-ab35-3da4bdb0d60d_0(3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a" Netns:"/var/run/netns/f6a41e03-dc7c-491e-95f4-8fac8794d622" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3e3be99ab6f86e1138022baadf8909506c83e342ac8044eec0fe289ea3022f0a;K8S_POD_UID=d646c15c-f058-4ca7-ab35-3da4bdb0d60d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/d646c15c-f058-4ca7-ab35-3da4bdb0d60d]: expected pod UID "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" but got "7fa93987-e84a-4fa8-97ab-4df24aabb201" from Kube API Feb 23 18:51:23 crc kubenswrapper[4768]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 23 18:51:23 crc kubenswrapper[4768]: > pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.201906 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.203267 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.204589 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkzj\" (UniqueName: \"kubernetes.io/projected/7fa93987-e84a-4fa8-97ab-4df24aabb201-kube-api-access-vmkzj\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.204673 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.204756 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.213231 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.214714 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa93987-e84a-4fa8-97ab-4df24aabb201-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.227401 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkzj\" (UniqueName: \"kubernetes.io/projected/7fa93987-e84a-4fa8-97ab-4df24aabb201-kube-api-access-vmkzj\") pod \"openstackclient\" (UID: \"7fa93987-e84a-4fa8-97ab-4df24aabb201\") " pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.240285 4768 generic.go:334] "Generic (PLEG): container finished" podID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerID="7c96206d9b25176eff8047bab2de7452e702c0f1ff9b66bfd26c118c0357ad7e" exitCode=0 Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.240382 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerDied","Data":"7c96206d9b25176eff8047bab2de7452e702c0f1ff9b66bfd26c118c0357ad7e"} Feb 23 18:51:23 crc 
kubenswrapper[4768]: I0223 18:51:23.251660 4768 generic.go:334] "Generic (PLEG): container finished" podID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerID="020d0d98589f3508180b9e8f1cc77361ae54bab645d684cbdb0d76775d09bb3c" exitCode=143 Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.251758 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.252577 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerDied","Data":"020d0d98589f3508180b9e8f1cc77361ae54bab645d684cbdb0d76775d09bb3c"} Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.261851 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d646c15c-f058-4ca7-ab35-3da4bdb0d60d" podUID="7fa93987-e84a-4fa8-97ab-4df24aabb201" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.266013 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.410002 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle\") pod \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.410245 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret\") pod \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.410321 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config\") pod \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.410501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2pl9\" (UniqueName: \"kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9\") pod \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\" (UID: \"d646c15c-f058-4ca7-ab35-3da4bdb0d60d\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.412963 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" (UID: "d646c15c-f058-4ca7-ab35-3da4bdb0d60d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.416486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" (UID: "d646c15c-f058-4ca7-ab35-3da4bdb0d60d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.416518 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" (UID: "d646c15c-f058-4ca7-ab35-3da4bdb0d60d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.422480 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9" (OuterVolumeSpecName: "kube-api-access-t2pl9") pod "d646c15c-f058-4ca7-ab35-3da4bdb0d60d" (UID: "d646c15c-f058-4ca7-ab35-3da4bdb0d60d"). InnerVolumeSpecName "kube-api-access-t2pl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.511802 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.512824 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2pl9\" (UniqueName: \"kubernetes.io/projected/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-kube-api-access-t2pl9\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.512866 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.512880 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.512892 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d646c15c-f058-4ca7-ab35-3da4bdb0d60d-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.557600 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.591412 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.719884 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs\") pod \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.720069 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data\") pod \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.720105 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom\") pod \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.720151 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4428t\" (UniqueName: \"kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t\") pod \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\" (UID: \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.720211 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle\") pod \"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\" (UID: 
\"1db9dd83-857d-446f-ae79-6a0d0a4bda0a\") " Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.724226 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs" (OuterVolumeSpecName: "logs") pod "1db9dd83-857d-446f-ae79-6a0d0a4bda0a" (UID: "1db9dd83-857d-446f-ae79-6a0d0a4bda0a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.732478 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1db9dd83-857d-446f-ae79-6a0d0a4bda0a" (UID: "1db9dd83-857d-446f-ae79-6a0d0a4bda0a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.741131 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t" (OuterVolumeSpecName: "kube-api-access-4428t") pod "1db9dd83-857d-446f-ae79-6a0d0a4bda0a" (UID: "1db9dd83-857d-446f-ae79-6a0d0a4bda0a"). InnerVolumeSpecName "kube-api-access-4428t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.752294 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1db9dd83-857d-446f-ae79-6a0d0a4bda0a" (UID: "1db9dd83-857d-446f-ae79-6a0d0a4bda0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.809020 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data" (OuterVolumeSpecName: "config-data") pod "1db9dd83-857d-446f-ae79-6a0d0a4bda0a" (UID: "1db9dd83-857d-446f-ae79-6a0d0a4bda0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.824238 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.824308 4768 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.824326 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4428t\" (UniqueName: \"kubernetes.io/projected/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-kube-api-access-4428t\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.824344 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:23 crc kubenswrapper[4768]: I0223 18:51:23.824357 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1db9dd83-857d-446f-ae79-6a0d0a4bda0a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.074962 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 23 18:51:24 crc 
kubenswrapper[4768]: I0223 18:51:24.263758 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5fbfdc6854-l4dxh" event={"ID":"1db9dd83-857d-446f-ae79-6a0d0a4bda0a","Type":"ContainerDied","Data":"e98aa52a73d3e374c9c62bbe3b4398bbd7e9bf2da80f7b7b52de3eb4dad493f2"} Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.263823 4768 scope.go:117] "RemoveContainer" containerID="7c96206d9b25176eff8047bab2de7452e702c0f1ff9b66bfd26c118c0357ad7e" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.263845 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5fbfdc6854-l4dxh" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.267917 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fa93987-e84a-4fa8-97ab-4df24aabb201","Type":"ContainerStarted","Data":"a9381b5b8079d3ffdfa0943c09bcf12d7b058694d42987acbd28a7da0ba15d5a"} Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.267970 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.289933 4768 scope.go:117] "RemoveContainer" containerID="a9b0965d2ddf697690d6f8212c92514cba2d4aa4404268d1e5361473cfe5275c" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.303167 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.304023 4768 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="d646c15c-f058-4ca7-ab35-3da4bdb0d60d" podUID="7fa93987-e84a-4fa8-97ab-4df24aabb201" Feb 23 18:51:24 crc kubenswrapper[4768]: I0223 18:51:24.312030 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5fbfdc6854-l4dxh"] Feb 23 18:51:25 crc kubenswrapper[4768]: I0223 18:51:25.328581 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" path="/var/lib/kubelet/pods/1db9dd83-857d-446f-ae79-6a0d0a4bda0a/volumes" Feb 23 18:51:25 crc kubenswrapper[4768]: I0223 18:51:25.329608 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d646c15c-f058-4ca7-ab35-3da4bdb0d60d" path="/var/lib/kubelet/pods/d646c15c-f058-4ca7-ab35-3da4bdb0d60d/volumes" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.313558 4768 generic.go:334] "Generic (PLEG): container finished" podID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerID="b3b281fb91b9a51cf32e69a11331e5cb0b62fa031b0026402ec1ee29425193c9" exitCode=0 Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.313642 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerDied","Data":"b3b281fb91b9a51cf32e69a11331e5cb0b62fa031b0026402ec1ee29425193c9"} Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.663465 4768 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.786820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.787297 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.787610 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.787824 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw784\" (UniqueName: \"kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.787936 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.788067 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.788222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle\") pod \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\" (UID: \"a54b90e0-5929-42b7-94bc-8eb916ce8bde\") " Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.788728 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs" (OuterVolumeSpecName: "logs") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.791064 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a54b90e0-5929-42b7-94bc-8eb916ce8bde-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.796148 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts" (OuterVolumeSpecName: "scripts") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.807452 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784" (OuterVolumeSpecName: "kube-api-access-tw784") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "kube-api-access-tw784". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.875397 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data" (OuterVolumeSpecName: "config-data") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.884858 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.892924 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw784\" (UniqueName: \"kubernetes.io/projected/a54b90e0-5929-42b7-94bc-8eb916ce8bde-kube-api-access-tw784\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.892975 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.892992 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.893000 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.938383 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.948602 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a54b90e0-5929-42b7-94bc-8eb916ce8bde" (UID: "a54b90e0-5929-42b7-94bc-8eb916ce8bde"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.994949 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:26 crc kubenswrapper[4768]: I0223 18:51:26.995333 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a54b90e0-5929-42b7-94bc-8eb916ce8bde-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.353811 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6cdff58f68-7n8ch" event={"ID":"a54b90e0-5929-42b7-94bc-8eb916ce8bde","Type":"ContainerDied","Data":"4c6b9d20115e3e98a712fb9de38bfc508bd98ce63d226cbf4d64a29ffb333f51"} Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.354554 4768 scope.go:117] "RemoveContainer" containerID="b3b281fb91b9a51cf32e69a11331e5cb0b62fa031b0026402ec1ee29425193c9" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.354717 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6cdff58f68-7n8ch" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.386725 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"] Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.395841 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6cdff58f68-7n8ch"] Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.402546 4768 scope.go:117] "RemoveContainer" containerID="020d0d98589f3508180b9e8f1cc77361ae54bab645d684cbdb0d76775d09bb3c" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.772474 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-66dcf5bf6c-4q2hn"] Feb 23 18:51:27 crc kubenswrapper[4768]: E0223 18:51:27.772948 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.772961 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api" Feb 23 18:51:27 crc kubenswrapper[4768]: E0223 18:51:27.772977 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-log" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.772983 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-log" Feb 23 18:51:27 crc kubenswrapper[4768]: E0223 18:51:27.773007 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-api" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773013 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-api" Feb 23 18:51:27 crc kubenswrapper[4768]: E0223 18:51:27.773031 4768 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api-log" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773037 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api-log" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773205 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-log" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773219 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" containerName="placement-api" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773232 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.773252 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db9dd83-857d-446f-ae79-6a0d0a4bda0a" containerName="barbican-api-log" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.774316 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.776518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.776751 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.787659 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66dcf5bf6c-4q2hn"] Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.805477 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.918664 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwbg9\" (UniqueName: \"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-kube-api-access-qwbg9\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.918725 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-public-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.918836 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-internal-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc 
kubenswrapper[4768]: I0223 18:51:27.918938 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-log-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.919182 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-combined-ca-bundle\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.919428 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-config-data\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.919528 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-etc-swift\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:27 crc kubenswrapper[4768]: I0223 18:51:27.919668 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-run-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 
18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.020951 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-config-data\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021254 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-etc-swift\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021382 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-run-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwbg9\" (UniqueName: \"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-kube-api-access-qwbg9\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021600 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-public-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021695 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-internal-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021783 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-log-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.021894 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-combined-ca-bundle\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.022119 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-run-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.022384 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/70d5ee44-4e4a-4f31-8104-a72d66f78d72-log-httpd\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.027917 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-etc-swift\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.027958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-public-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.028234 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-config-data\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.029180 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-internal-tls-certs\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.031795 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d5ee44-4e4a-4f31-8104-a72d66f78d72-combined-ca-bundle\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.045774 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwbg9\" (UniqueName: 
\"kubernetes.io/projected/70d5ee44-4e4a-4f31-8104-a72d66f78d72-kube-api-access-qwbg9\") pod \"swift-proxy-66dcf5bf6c-4q2hn\" (UID: \"70d5ee44-4e4a-4f31-8104-a72d66f78d72\") " pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.108222 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.675585 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-66dcf5bf6c-4q2hn"] Feb 23 18:51:28 crc kubenswrapper[4768]: I0223 18:51:28.990632 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.328396 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a54b90e0-5929-42b7-94bc-8eb916ce8bde" path="/var/lib/kubelet/pods/a54b90e0-5929-42b7-94bc-8eb916ce8bde/volumes" Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.574462 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.574928 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-central-agent" containerID="cri-o://0074e7e3e52af085dabc712b9f23cb2c5260943006e063e55ddb5d8268252469" gracePeriod=30 Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.575553 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-notification-agent" containerID="cri-o://f2d9f2ecec84384072a8efa4854b01bbf8cdede0e2444531ec254f0c9d2bc2f4" gracePeriod=30 Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.575580 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="sg-core" containerID="cri-o://756e9e3a604fd8311db65771801ca231fa0f51612c1a6e667808ab1788bd6a08" gracePeriod=30 Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.575803 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="proxy-httpd" containerID="cri-o://e912c246592de399cac7c5dab58f7249daf90eb63e35ae05d11db8bf2726b7d1" gracePeriod=30 Feb 23 18:51:29 crc kubenswrapper[4768]: I0223 18:51:29.586696 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.170:3000/\": EOF" Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.388902 4768 generic.go:334] "Generic (PLEG): container finished" podID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerID="e912c246592de399cac7c5dab58f7249daf90eb63e35ae05d11db8bf2726b7d1" exitCode=0 Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.390226 4768 generic.go:334] "Generic (PLEG): container finished" podID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerID="756e9e3a604fd8311db65771801ca231fa0f51612c1a6e667808ab1788bd6a08" exitCode=2 Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.390362 4768 generic.go:334] "Generic (PLEG): container finished" podID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerID="0074e7e3e52af085dabc712b9f23cb2c5260943006e063e55ddb5d8268252469" exitCode=0 Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.390454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerDied","Data":"e912c246592de399cac7c5dab58f7249daf90eb63e35ae05d11db8bf2726b7d1"} Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.390575 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerDied","Data":"756e9e3a604fd8311db65771801ca231fa0f51612c1a6e667808ab1788bd6a08"} Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.390663 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerDied","Data":"0074e7e3e52af085dabc712b9f23cb2c5260943006e063e55ddb5d8268252469"} Feb 23 18:51:30 crc kubenswrapper[4768]: I0223 18:51:30.719785 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.073795 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x4hdd"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.075783 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.095418 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x4hdd"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.146375 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.146663 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjtms\" (UniqueName: \"kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.223292 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a8c9-account-create-update-vhvc2"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.224939 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.228335 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.233310 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a8c9-account-create-update-vhvc2"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.254275 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.254379 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjtms\" (UniqueName: \"kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.255277 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.281117 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjtms\" (UniqueName: \"kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms\") pod \"nova-api-db-create-x4hdd\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc 
kubenswrapper[4768]: I0223 18:51:33.356846 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.356895 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdb7r\" (UniqueName: \"kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.409076 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.441446 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-lg5zn"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.459846 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.459916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdb7r\" (UniqueName: \"kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " 
pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.459997 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.461699 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.478315 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdb7r\" (UniqueName: \"kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r\") pod \"nova-api-a8c9-account-create-update-vhvc2\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.478393 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8a6b-account-create-update-stzln"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.479770 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.481453 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.491511 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a6b-account-create-update-stzln"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.499462 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lg5zn"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.511529 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-b9bx5"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.512876 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.518447 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b9bx5"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.559934 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561364 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt7lf\" (UniqueName: \"kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561487 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4q8q\" (UniqueName: \"kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561515 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561553 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmsw\" (UniqueName: 
\"kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.561588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.592469 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-250a-account-create-update-q99n5"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.593935 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.597465 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.600941 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-250a-account-create-update-q99n5"] Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlnlc\" (UniqueName: \"kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4q8q\" 
(UniqueName: \"kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664329 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664401 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqmsw\" (UniqueName: \"kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664421 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664497 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664531 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt7lf\" (UniqueName: \"kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.664590 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.665497 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.666028 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.673034 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.684848 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xt7lf\" (UniqueName: \"kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf\") pod \"nova-cell0-db-create-lg5zn\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.688953 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqmsw\" (UniqueName: \"kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw\") pod \"nova-cell1-db-create-b9bx5\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.689038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4q8q\" (UniqueName: \"kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q\") pod \"nova-cell0-8a6b-account-create-update-stzln\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.767454 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlnlc\" (UniqueName: \"kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.767596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc 
kubenswrapper[4768]: I0223 18:51:33.768444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.790126 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlnlc\" (UniqueName: \"kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc\") pod \"nova-cell1-250a-account-create-update-q99n5\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.836755 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.852201 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.870364 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:33 crc kubenswrapper[4768]: I0223 18:51:33.926711 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.448820 4768 generic.go:334] "Generic (PLEG): container finished" podID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerID="f2d9f2ecec84384072a8efa4854b01bbf8cdede0e2444531ec254f0c9d2bc2f4" exitCode=0 Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.449183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerDied","Data":"f2d9f2ecec84384072a8efa4854b01bbf8cdede0e2444531ec254f0c9d2bc2f4"} Feb 23 18:51:34 crc kubenswrapper[4768]: W0223 18:51:34.458291 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d5ee44_4e4a_4f31_8104_a72d66f78d72.slice/crio-e2085562370038f2c60d2a7b0cbefcf03834dbbf03f8b31cac11ab131c59d1ec WatchSource:0}: Error finding container e2085562370038f2c60d2a7b0cbefcf03834dbbf03f8b31cac11ab131c59d1ec: Status 404 returned error can't find the container with id e2085562370038f2c60d2a7b0cbefcf03834dbbf03f8b31cac11ab131c59d1ec Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.885055 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.997845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998386 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998528 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998557 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998605 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5h4j\" (UniqueName: \"kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.998713 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd\") pod \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\" (UID: \"766c3286-0e91-45d1-81a4-d06fdcf1e8d4\") " Feb 23 18:51:34 crc kubenswrapper[4768]: I0223 18:51:34.999062 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:34.999564 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.005367 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j" (OuterVolumeSpecName: "kube-api-access-j5h4j") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "kube-api-access-j5h4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.020632 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts" (OuterVolumeSpecName: "scripts") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.049605 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.082703 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x4hdd"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.101186 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.101223 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.101234 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.101243 4768 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-j5h4j\" (UniqueName: \"kubernetes.io/projected/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-kube-api-access-j5h4j\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.101280 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.182119 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.205961 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.207437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data" (OuterVolumeSpecName: "config-data") pod "766c3286-0e91-45d1-81a4-d06fdcf1e8d4" (UID: "766c3286-0e91-45d1-81a4-d06fdcf1e8d4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.254722 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-b9bx5"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.314875 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/766c3286-0e91-45d1-81a4-d06fdcf1e8d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.480919 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"766c3286-0e91-45d1-81a4-d06fdcf1e8d4","Type":"ContainerDied","Data":"9ffe62cd69e989317026b2326e93d566bbf0a9ce7c41c789dbd3745a6867ae30"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.481384 4768 scope.go:117] "RemoveContainer" containerID="e912c246592de399cac7c5dab58f7249daf90eb63e35ae05d11db8bf2726b7d1" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.481779 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.487103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x4hdd" event={"ID":"124ee684-6570-4e6c-856b-516e1b2f793a","Type":"ContainerStarted","Data":"2701fc3cd55d0ab15a1e723319da8293ded5ed86fee531366911061601abd1a9"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.487142 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x4hdd" event={"ID":"124ee684-6570-4e6c-856b-516e1b2f793a","Type":"ContainerStarted","Data":"e287180c5c4defbe0cdaf4c9928f0175c31a2d0f130b401d004410ddd9b43aa4"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.497918 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" event={"ID":"70d5ee44-4e4a-4f31-8104-a72d66f78d72","Type":"ContainerStarted","Data":"52295ca66021b395ca0a9443ca2e74d9708f5f124445b07cb858faf3604dbd39"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.497986 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" event={"ID":"70d5ee44-4e4a-4f31-8104-a72d66f78d72","Type":"ContainerStarted","Data":"9dcfb3b3ebd32568c71861e68837c4ee5f11e735dc023228b3e0af29063a86b9"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.497999 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" event={"ID":"70d5ee44-4e4a-4f31-8104-a72d66f78d72","Type":"ContainerStarted","Data":"e2085562370038f2c60d2a7b0cbefcf03834dbbf03f8b31cac11ab131c59d1ec"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.498531 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.498679 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:35 crc 
kubenswrapper[4768]: I0223 18:51:35.504950 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b9bx5" event={"ID":"a8148745-3469-4ca2-a2dd-bc459d1b5eb7","Type":"ContainerStarted","Data":"ec0792e45078dbce1dce8cfe4041356566789942714a46e70414ff08f98f26fd"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.540133 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-x4hdd" podStartSLOduration=2.540108681 podStartE2EDuration="2.540108681s" podCreationTimestamp="2026-02-23 18:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:35.525641545 +0000 UTC m=+1090.916127335" watchObservedRunningTime="2026-02-23 18:51:35.540108681 +0000 UTC m=+1090.930594481" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.550880 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fa93987-e84a-4fa8-97ab-4df24aabb201","Type":"ContainerStarted","Data":"4652a6a98faca926c8f112cb8fb526a0f354c2b9655ce2ed3adc2f02ff5d98c6"} Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.585266 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" podStartSLOduration=8.585226588 podStartE2EDuration="8.585226588s" podCreationTimestamp="2026-02-23 18:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:35.547999438 +0000 UTC m=+1090.938485238" watchObservedRunningTime="2026-02-23 18:51:35.585226588 +0000 UTC m=+1090.975712388" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.594198 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.982943554 podStartE2EDuration="13.594173553s" podCreationTimestamp="2026-02-23 
18:51:22 +0000 UTC" firstStartedPulling="2026-02-23 18:51:24.085079144 +0000 UTC m=+1079.475564944" lastFinishedPulling="2026-02-23 18:51:34.696309153 +0000 UTC m=+1090.086794943" observedRunningTime="2026-02-23 18:51:35.579772048 +0000 UTC m=+1090.970257848" watchObservedRunningTime="2026-02-23 18:51:35.594173553 +0000 UTC m=+1090.984659353" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.606425 4768 scope.go:117] "RemoveContainer" containerID="756e9e3a604fd8311db65771801ca231fa0f51612c1a6e667808ab1788bd6a08" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.629054 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.641367 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.655018 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-lg5zn"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.659077 4768 scope.go:117] "RemoveContainer" containerID="f2d9f2ecec84384072a8efa4854b01bbf8cdede0e2444531ec254f0c9d2bc2f4" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.662814 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:35 crc kubenswrapper[4768]: E0223 18:51:35.663146 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="proxy-httpd" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663160 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="proxy-httpd" Feb 23 18:51:35 crc kubenswrapper[4768]: E0223 18:51:35.663174 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="sg-core" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663180 4768 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="sg-core" Feb 23 18:51:35 crc kubenswrapper[4768]: E0223 18:51:35.663200 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-central-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663205 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-central-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: E0223 18:51:35.663212 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-notification-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663218 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-notification-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663863 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="sg-core" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663880 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-notification-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663893 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="ceilometer-central-agent" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.663904 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" containerName="proxy-httpd" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.665465 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.668406 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.671504 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.714149 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735749 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735771 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " 
pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735822 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735843 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.735875 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmbg\" (UniqueName: \"kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.745719 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8a6b-account-create-update-stzln"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.761015 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-250a-account-create-update-q99n5"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.769184 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a8c9-account-create-update-vhvc2"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.779530 4768 scope.go:117] "RemoveContainer" containerID="0074e7e3e52af085dabc712b9f23cb2c5260943006e063e55ddb5d8268252469" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841216 4768 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841533 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.841573 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc 
kubenswrapper[4768]: I0223 18:51:35.841620 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmbg\" (UniqueName: \"kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.843199 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.843553 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.851354 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.855098 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.855516 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.860283 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.875697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmbg\" (UniqueName: \"kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg\") pod \"ceilometer-0\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " pod="openstack/ceilometer-0" Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.953545 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.954085 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-log" containerID="cri-o://15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad" gracePeriod=30 Feb 23 18:51:35 crc kubenswrapper[4768]: I0223 18:51:35.954491 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-httpd" containerID="cri-o://53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3" gracePeriod=30 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.067233 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.551912 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-546cfc7689-gsp5x" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.573490 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" event={"ID":"dd539a7e-17cc-4c2a-a066-fecd85ee2261","Type":"ContainerStarted","Data":"6e2a9a01e2373d545c697c8ecbf49389affe09926d20b721eab252697fa75b48"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.573548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" event={"ID":"dd539a7e-17cc-4c2a-a066-fecd85ee2261","Type":"ContainerStarted","Data":"0370f0659eb91fe1c1c45a8662592d152b393ec5bade11e205a208c243b52630"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.585078 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" event={"ID":"92c8735b-69ac-497c-8c20-08580587d926","Type":"ContainerStarted","Data":"6f2ab964c3605681eacec273cc5ac72a134a8dd059ac2635eca48e192b509100"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.585428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" event={"ID":"92c8735b-69ac-497c-8c20-08580587d926","Type":"ContainerStarted","Data":"132f7e1d3877160e556321af5438e581047c4f5c6afc0181ab12d21590e6ba59"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.593516 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lg5zn" event={"ID":"678921d0-cd54-4104-afdd-e6a47489b0e3","Type":"ContainerStarted","Data":"25845eb73b91af06a482306cbbbf8bde0c72180f7b997a4615f2608c37e93607"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.594428 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-lg5zn" event={"ID":"678921d0-cd54-4104-afdd-e6a47489b0e3","Type":"ContainerStarted","Data":"63f1290f3f524d0a4eef01ba3064b451ef255c738b3addc3045062a04d981e42"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.603419 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" podStartSLOduration=3.603396555 podStartE2EDuration="3.603396555s" podCreationTimestamp="2026-02-23 18:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:36.595171419 +0000 UTC m=+1091.985657219" watchObservedRunningTime="2026-02-23 18:51:36.603396555 +0000 UTC m=+1091.993882355" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.608358 4768 generic.go:334] "Generic (PLEG): container finished" podID="a8148745-3469-4ca2-a2dd-bc459d1b5eb7" containerID="3b40f18aa1b8f59f4050e4daa0594072afca963b69b76b7c2818b7919e7be8b9" exitCode=0 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.608695 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b9bx5" event={"ID":"a8148745-3469-4ca2-a2dd-bc459d1b5eb7","Type":"ContainerDied","Data":"3b40f18aa1b8f59f4050e4daa0594072afca963b69b76b7c2818b7919e7be8b9"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.620505 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-lg5zn" podStartSLOduration=3.620480143 podStartE2EDuration="3.620480143s" podCreationTimestamp="2026-02-23 18:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:36.613505562 +0000 UTC m=+1092.003991362" watchObservedRunningTime="2026-02-23 18:51:36.620480143 +0000 UTC m=+1092.010965953" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.629591 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-250a-account-create-update-q99n5" event={"ID":"7e834410-f86e-424f-81ac-73de81ffeb25","Type":"ContainerStarted","Data":"d224c7cdfadcf1640ca1baf851c28c8981c4a22adee2a39159a4cb0ad408cef6"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.629632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-250a-account-create-update-q99n5" event={"ID":"7e834410-f86e-424f-81ac-73de81ffeb25","Type":"ContainerStarted","Data":"174aaf6ec64cf400a30b191b62afa9f961a240260f8398277e2d7414bfef434f"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.649487 4768 generic.go:334] "Generic (PLEG): container finished" podID="124ee684-6570-4e6c-856b-516e1b2f793a" containerID="2701fc3cd55d0ab15a1e723319da8293ded5ed86fee531366911061601abd1a9" exitCode=0 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.649701 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.649731 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x4hdd" event={"ID":"124ee684-6570-4e6c-856b-516e1b2f793a","Type":"ContainerDied","Data":"2701fc3cd55d0ab15a1e723319da8293ded5ed86fee531366911061601abd1a9"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.649919 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66d66bdc85-82928" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-api" containerID="cri-o://7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3" gracePeriod=30 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.650019 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66d66bdc85-82928" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-httpd" containerID="cri-o://4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf" 
gracePeriod=30 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.676727 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerID="15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad" exitCode=143 Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.677647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerDied","Data":"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad"} Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.701462 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" podStartSLOduration=3.701433632 podStartE2EDuration="3.701433632s" podCreationTimestamp="2026-02-23 18:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:36.641775017 +0000 UTC m=+1092.032260817" watchObservedRunningTime="2026-02-23 18:51:36.701433632 +0000 UTC m=+1092.091919432" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.713725 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-250a-account-create-update-q99n5" podStartSLOduration=3.713705409 podStartE2EDuration="3.713705409s" podCreationTimestamp="2026-02-23 18:51:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:36.661727314 +0000 UTC m=+1092.052213134" watchObservedRunningTime="2026-02-23 18:51:36.713705409 +0000 UTC m=+1092.104191209" Feb 23 18:51:36 crc kubenswrapper[4768]: I0223 18:51:36.819716 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:36 crc kubenswrapper[4768]: W0223 18:51:36.878630 4768 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4adfc2e_38dd_497c_9a24_632365dc96ea.slice/crio-87abd56f5f576f22c025e3e045d9c8cf77b1f669e31600c43606fab68b07caf0 WatchSource:0}: Error finding container 87abd56f5f576f22c025e3e045d9c8cf77b1f669e31600c43606fab68b07caf0: Status 404 returned error can't find the container with id 87abd56f5f576f22c025e3e045d9c8cf77b1f669e31600c43606fab68b07caf0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.331118 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="766c3286-0e91-45d1-81a4-d06fdcf1e8d4" path="/var/lib/kubelet/pods/766c3286-0e91-45d1-81a4-d06fdcf1e8d4/volumes" Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.688415 4768 generic.go:334] "Generic (PLEG): container finished" podID="678921d0-cd54-4104-afdd-e6a47489b0e3" containerID="25845eb73b91af06a482306cbbbf8bde0c72180f7b997a4615f2608c37e93607" exitCode=0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.688504 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lg5zn" event={"ID":"678921d0-cd54-4104-afdd-e6a47489b0e3","Type":"ContainerDied","Data":"25845eb73b91af06a482306cbbbf8bde0c72180f7b997a4615f2608c37e93607"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.690778 4768 generic.go:334] "Generic (PLEG): container finished" podID="dd539a7e-17cc-4c2a-a066-fecd85ee2261" containerID="6e2a9a01e2373d545c697c8ecbf49389affe09926d20b721eab252697fa75b48" exitCode=0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.690864 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" event={"ID":"dd539a7e-17cc-4c2a-a066-fecd85ee2261","Type":"ContainerDied","Data":"6e2a9a01e2373d545c697c8ecbf49389affe09926d20b721eab252697fa75b48"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.692734 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerID="4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf" exitCode=0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.692792 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerDied","Data":"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.694458 4768 generic.go:334] "Generic (PLEG): container finished" podID="7e834410-f86e-424f-81ac-73de81ffeb25" containerID="d224c7cdfadcf1640ca1baf851c28c8981c4a22adee2a39159a4cb0ad408cef6" exitCode=0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.694624 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-250a-account-create-update-q99n5" event={"ID":"7e834410-f86e-424f-81ac-73de81ffeb25","Type":"ContainerDied","Data":"d224c7cdfadcf1640ca1baf851c28c8981c4a22adee2a39159a4cb0ad408cef6"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.696154 4768 generic.go:334] "Generic (PLEG): container finished" podID="92c8735b-69ac-497c-8c20-08580587d926" containerID="6f2ab964c3605681eacec273cc5ac72a134a8dd059ac2635eca48e192b509100" exitCode=0 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.696277 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" event={"ID":"92c8735b-69ac-497c-8c20-08580587d926","Type":"ContainerDied","Data":"6f2ab964c3605681eacec273cc5ac72a134a8dd059ac2635eca48e192b509100"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.698975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerStarted","Data":"a9bbb315128aaf6897f425eed34a0d220e409a67ad3ce72ccb43f1d6232abc37"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.699033 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerStarted","Data":"87abd56f5f576f22c025e3e045d9c8cf77b1f669e31600c43606fab68b07caf0"} Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.701144 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.701446 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-log" containerID="cri-o://0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb" gracePeriod=30 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.701528 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-httpd" containerID="cri-o://5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62" gracePeriod=30 Feb 23 18:51:37 crc kubenswrapper[4768]: I0223 18:51:37.826508 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.264286 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.271177 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.335395 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqmsw\" (UniqueName: \"kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw\") pod \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.335598 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjtms\" (UniqueName: \"kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms\") pod \"124ee684-6570-4e6c-856b-516e1b2f793a\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.335902 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts\") pod \"124ee684-6570-4e6c-856b-516e1b2f793a\" (UID: \"124ee684-6570-4e6c-856b-516e1b2f793a\") " Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.335949 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts\") pod \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\" (UID: \"a8148745-3469-4ca2-a2dd-bc459d1b5eb7\") " Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.337572 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8148745-3469-4ca2-a2dd-bc459d1b5eb7" (UID: "a8148745-3469-4ca2-a2dd-bc459d1b5eb7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.338057 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "124ee684-6570-4e6c-856b-516e1b2f793a" (UID: "124ee684-6570-4e6c-856b-516e1b2f793a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.347387 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms" (OuterVolumeSpecName: "kube-api-access-gjtms") pod "124ee684-6570-4e6c-856b-516e1b2f793a" (UID: "124ee684-6570-4e6c-856b-516e1b2f793a"). InnerVolumeSpecName "kube-api-access-gjtms". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.348521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw" (OuterVolumeSpecName: "kube-api-access-jqmsw") pod "a8148745-3469-4ca2-a2dd-bc459d1b5eb7" (UID: "a8148745-3469-4ca2-a2dd-bc459d1b5eb7"). InnerVolumeSpecName "kube-api-access-jqmsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.438071 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjtms\" (UniqueName: \"kubernetes.io/projected/124ee684-6570-4e6c-856b-516e1b2f793a-kube-api-access-gjtms\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.438111 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/124ee684-6570-4e6c-856b-516e1b2f793a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.438120 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.438130 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqmsw\" (UniqueName: \"kubernetes.io/projected/a8148745-3469-4ca2-a2dd-bc459d1b5eb7-kube-api-access-jqmsw\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.710836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x4hdd" event={"ID":"124ee684-6570-4e6c-856b-516e1b2f793a","Type":"ContainerDied","Data":"e287180c5c4defbe0cdaf4c9928f0175c31a2d0f130b401d004410ddd9b43aa4"} Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.710904 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e287180c5c4defbe0cdaf4c9928f0175c31a2d0f130b401d004410ddd9b43aa4" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.710854 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x4hdd" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.716397 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerStarted","Data":"19cd0ed487fa211e10e30af708999bf4ec2efdd21599154f885f6c4ba63de81c"} Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.722854 4768 generic.go:334] "Generic (PLEG): container finished" podID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerID="0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb" exitCode=143 Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.722937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerDied","Data":"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb"} Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.726609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-b9bx5" event={"ID":"a8148745-3469-4ca2-a2dd-bc459d1b5eb7","Type":"ContainerDied","Data":"ec0792e45078dbce1dce8cfe4041356566789942714a46e70414ff08f98f26fd"} Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.726647 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0792e45078dbce1dce8cfe4041356566789942714a46e70414ff08f98f26fd" Feb 23 18:51:38 crc kubenswrapper[4768]: I0223 18:51:38.726667 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-b9bx5" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.182971 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.260919 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4q8q\" (UniqueName: \"kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q\") pod \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.261086 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts\") pod \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\" (UID: \"dd539a7e-17cc-4c2a-a066-fecd85ee2261\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.261741 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd539a7e-17cc-4c2a-a066-fecd85ee2261" (UID: "dd539a7e-17cc-4c2a-a066-fecd85ee2261"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.267314 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q" (OuterVolumeSpecName: "kube-api-access-q4q8q") pod "dd539a7e-17cc-4c2a-a066-fecd85ee2261" (UID: "dd539a7e-17cc-4c2a-a066-fecd85ee2261"). InnerVolumeSpecName "kube-api-access-q4q8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.346633 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.347506 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.362871 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4q8q\" (UniqueName: \"kubernetes.io/projected/dd539a7e-17cc-4c2a-a066-fecd85ee2261-kube-api-access-q4q8q\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.363096 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd539a7e-17cc-4c2a-a066-fecd85ee2261-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.394484 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478165 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts\") pod \"678921d0-cd54-4104-afdd-e6a47489b0e3\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478469 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "678921d0-cd54-4104-afdd-e6a47489b0e3" (UID: "678921d0-cd54-4104-afdd-e6a47489b0e3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478533 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts\") pod \"92c8735b-69ac-497c-8c20-08580587d926\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478766 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlnlc\" (UniqueName: \"kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc\") pod \"7e834410-f86e-424f-81ac-73de81ffeb25\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478848 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt7lf\" (UniqueName: \"kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf\") pod \"678921d0-cd54-4104-afdd-e6a47489b0e3\" (UID: \"678921d0-cd54-4104-afdd-e6a47489b0e3\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478898 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92c8735b-69ac-497c-8c20-08580587d926" (UID: "92c8735b-69ac-497c-8c20-08580587d926"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.478963 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdb7r\" (UniqueName: \"kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r\") pod \"92c8735b-69ac-497c-8c20-08580587d926\" (UID: \"92c8735b-69ac-497c-8c20-08580587d926\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.479012 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts\") pod \"7e834410-f86e-424f-81ac-73de81ffeb25\" (UID: \"7e834410-f86e-424f-81ac-73de81ffeb25\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.479941 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e834410-f86e-424f-81ac-73de81ffeb25" (UID: "7e834410-f86e-424f-81ac-73de81ffeb25"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.481588 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e834410-f86e-424f-81ac-73de81ffeb25-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.481616 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/678921d0-cd54-4104-afdd-e6a47489b0e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.481627 4768 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92c8735b-69ac-497c-8c20-08580587d926-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.484601 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc" (OuterVolumeSpecName: "kube-api-access-mlnlc") pod "7e834410-f86e-424f-81ac-73de81ffeb25" (UID: "7e834410-f86e-424f-81ac-73de81ffeb25"). InnerVolumeSpecName "kube-api-access-mlnlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.487042 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r" (OuterVolumeSpecName: "kube-api-access-mdb7r") pod "92c8735b-69ac-497c-8c20-08580587d926" (UID: "92c8735b-69ac-497c-8c20-08580587d926"). InnerVolumeSpecName "kube-api-access-mdb7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.494130 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf" (OuterVolumeSpecName: "kube-api-access-xt7lf") pod "678921d0-cd54-4104-afdd-e6a47489b0e3" (UID: "678921d0-cd54-4104-afdd-e6a47489b0e3"). InnerVolumeSpecName "kube-api-access-xt7lf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.583535 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdb7r\" (UniqueName: \"kubernetes.io/projected/92c8735b-69ac-497c-8c20-08580587d926-kube-api-access-mdb7r\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.583579 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlnlc\" (UniqueName: \"kubernetes.io/projected/7e834410-f86e-424f-81ac-73de81ffeb25-kube-api-access-mlnlc\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.583589 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt7lf\" (UniqueName: \"kubernetes.io/projected/678921d0-cd54-4104-afdd-e6a47489b0e3-kube-api-access-xt7lf\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.712454 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.738387 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-lg5zn" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.738386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-lg5zn" event={"ID":"678921d0-cd54-4104-afdd-e6a47489b0e3","Type":"ContainerDied","Data":"63f1290f3f524d0a4eef01ba3064b451ef255c738b3addc3045062a04d981e42"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.739008 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63f1290f3f524d0a4eef01ba3064b451ef255c738b3addc3045062a04d981e42" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.746630 4768 generic.go:334] "Generic (PLEG): container finished" podID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerID="53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3" exitCode=0 Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.746863 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.746875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerDied","Data":"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.747045 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2a65a3f-ebd4-46e8-89bb-b402f6c91882","Type":"ContainerDied","Data":"0d84735f71b9a31f0d68faf09e7f907a55a352501910b0bbf4d68bc4fc5d4512"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.747118 4768 scope.go:117] "RemoveContainer" containerID="53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.750007 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-8a6b-account-create-update-stzln" event={"ID":"dd539a7e-17cc-4c2a-a066-fecd85ee2261","Type":"ContainerDied","Data":"0370f0659eb91fe1c1c45a8662592d152b393ec5bade11e205a208c243b52630"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.750048 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0370f0659eb91fe1c1c45a8662592d152b393ec5bade11e205a208c243b52630" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.750135 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8a6b-account-create-update-stzln" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.756097 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" event={"ID":"92c8735b-69ac-497c-8c20-08580587d926","Type":"ContainerDied","Data":"132f7e1d3877160e556321af5438e581047c4f5c6afc0181ab12d21590e6ba59"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.756141 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132f7e1d3877160e556321af5438e581047c4f5c6afc0181ab12d21590e6ba59" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.756203 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a8c9-account-create-update-vhvc2" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.765997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-250a-account-create-update-q99n5" event={"ID":"7e834410-f86e-424f-81ac-73de81ffeb25","Type":"ContainerDied","Data":"174aaf6ec64cf400a30b191b62afa9f961a240260f8398277e2d7414bfef434f"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.766038 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="174aaf6ec64cf400a30b191b62afa9f961a240260f8398277e2d7414bfef434f" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.766204 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-250a-account-create-update-q99n5" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.786163 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.786327 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.786375 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vmfq\" (UniqueName: \"kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.786435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.787345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.787487 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.787509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.787536 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.787785 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerStarted","Data":"b35bb1b7f7170ab04d4178551b510bbd7ade3f3e63d9a6714f4c1df1574985e3"} Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.789296 4768 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.792120 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs" (OuterVolumeSpecName: "logs") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.810827 4768 scope.go:117] "RemoveContainer" containerID="15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.810876 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq" (OuterVolumeSpecName: "kube-api-access-7vmfq") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "kube-api-access-7vmfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.811024 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts" (OuterVolumeSpecName: "scripts") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.819376 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.828341 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: E0223 18:51:39.850674 4768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs podName:e2a65a3f-ebd4-46e8-89bb-b402f6c91882 nodeName:}" failed. No retries permitted until 2026-02-23 18:51:40.350640131 +0000 UTC m=+1095.741125931 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882") : error deleting /var/lib/kubelet/pods/e2a65a3f-ebd4-46e8-89bb-b402f6c91882/volume-subpaths: remove /var/lib/kubelet/pods/e2a65a3f-ebd4-46e8-89bb-b402f6c91882/volume-subpaths: no such file or directory Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.853487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data" (OuterVolumeSpecName: "config-data") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.890563 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.891898 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.891910 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vmfq\" (UniqueName: \"kubernetes.io/projected/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-kube-api-access-7vmfq\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.891925 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.892058 4768 
reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.892069 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.892077 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.910464 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.972451 4768 scope.go:117] "RemoveContainer" containerID="53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3" Feb 23 18:51:39 crc kubenswrapper[4768]: E0223 18:51:39.972930 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3\": container with ID starting with 53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3 not found: ID does not exist" containerID="53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.972961 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3"} err="failed to get container status \"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3\": rpc error: code = NotFound desc = could not find container 
\"53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3\": container with ID starting with 53265b5944dd1c60e19002000fb3ea2ae2d8f8c9077e7fd5f53c3dee03a143c3 not found: ID does not exist" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.972982 4768 scope.go:117] "RemoveContainer" containerID="15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad" Feb 23 18:51:39 crc kubenswrapper[4768]: E0223 18:51:39.973222 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad\": container with ID starting with 15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad not found: ID does not exist" containerID="15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.973275 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad"} err="failed to get container status \"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad\": rpc error: code = NotFound desc = could not find container \"15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad\": container with ID starting with 15c1614d36dd8ecc78ee7f1c94b97cc778d50e7bb764aacf93e5f08ca2d1cfad not found: ID does not exist" Feb 23 18:51:39 crc kubenswrapper[4768]: I0223 18:51:39.994777 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.401653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") pod \"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\" (UID: 
\"e2a65a3f-ebd4-46e8-89bb-b402f6c91882\") " Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.407004 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e2a65a3f-ebd4-46e8-89bb-b402f6c91882" (UID: "e2a65a3f-ebd4-46e8-89bb-b402f6c91882"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.504546 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a65a3f-ebd4-46e8-89bb-b402f6c91882-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.684122 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.697685 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.718893 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-84699c9d66-ghjfn" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.149:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.149:8443: connect: connection refused" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.737681 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738119 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678921d0-cd54-4104-afdd-e6a47489b0e3" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738137 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="678921d0-cd54-4104-afdd-e6a47489b0e3" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124ee684-6570-4e6c-856b-516e1b2f793a" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738236 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="124ee684-6570-4e6c-856b-516e1b2f793a" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738304 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e834410-f86e-424f-81ac-73de81ffeb25" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738315 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e834410-f86e-424f-81ac-73de81ffeb25" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738328 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-log" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738335 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-log" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738353 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-httpd" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738359 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-httpd" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738367 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8148745-3469-4ca2-a2dd-bc459d1b5eb7" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738373 4768 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a8148745-3469-4ca2-a2dd-bc459d1b5eb7" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738379 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd539a7e-17cc-4c2a-a066-fecd85ee2261" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738385 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd539a7e-17cc-4c2a-a066-fecd85ee2261" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: E0223 18:51:40.738407 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c8735b-69ac-497c-8c20-08580587d926" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738413 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c8735b-69ac-497c-8c20-08580587d926" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738585 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8148745-3469-4ca2-a2dd-bc459d1b5eb7" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738597 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c8735b-69ac-497c-8c20-08580587d926" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738612 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-log" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738622 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="124ee684-6570-4e6c-856b-516e1b2f793a" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738631 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd539a7e-17cc-4c2a-a066-fecd85ee2261" 
containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738640 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="678921d0-cd54-4104-afdd-e6a47489b0e3" containerName="mariadb-database-create" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738648 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e834410-f86e-424f-81ac-73de81ffeb25" containerName="mariadb-account-create-update" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.738657 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" containerName="glance-httpd" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.739674 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.747942 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.748056 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.749864 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.812361 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.812642 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw49z\" (UniqueName: 
\"kubernetes.io/projected/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-kube-api-access-lw49z\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.812759 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.812868 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.812958 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-logs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.813028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.813151 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.813214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915167 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915218 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw49z\" (UniqueName: \"kubernetes.io/projected/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-kube-api-access-lw49z\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915243 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915334 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-logs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915356 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915432 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915773 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: 
\"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-logs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.915923 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.921185 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.922198 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-config-data\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.924444 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " 
pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.945060 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw49z\" (UniqueName: \"kubernetes.io/projected/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-kube-api-access-lw49z\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.957280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae86f9fa-10bf-4fbc-b768-0ac7e643483b-scripts\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:40 crc kubenswrapper[4768]: I0223 18:51:40.966000 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"ae86f9fa-10bf-4fbc-b768-0ac7e643483b\") " pod="openstack/glance-default-external-api-0" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.066397 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.361025 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a65a3f-ebd4-46e8-89bb-b402f6c91882" path="/var/lib/kubelet/pods/e2a65a3f-ebd4-46e8-89bb-b402f6c91882/volumes" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.415390 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.505994 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.535783 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.535918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.535946 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.536092 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.537190 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.537281 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.537326 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm6x7\" (UniqueName: \"kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7\") pod \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\" (UID: \"ef28ba99-309b-4f67-bf0a-e9e22e3808db\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.550547 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.561814 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7" (OuterVolumeSpecName: "kube-api-access-cm6x7") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "kube-api-access-cm6x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641387 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641564 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641634 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b88fx\" (UniqueName: \"kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641711 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641756 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641840 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641918 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.641950 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs\") pod \"a00c3dcd-826d-486b-9879-6e45d61a9907\" (UID: \"a00c3dcd-826d-486b-9879-6e45d61a9907\") " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.643464 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.643490 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm6x7\" (UniqueName: \"kubernetes.io/projected/ef28ba99-309b-4f67-bf0a-e9e22e3808db-kube-api-access-cm6x7\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.643883 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs" (OuterVolumeSpecName: "logs") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.651002 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.660852 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts" (OuterVolumeSpecName: "scripts") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.660901 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.665673 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx" (OuterVolumeSpecName: "kube-api-access-b88fx") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "kube-api-access-b88fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.683211 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.685351 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.687827 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.698345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config" (OuterVolumeSpecName: "config") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.729958 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.732758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data" (OuterVolumeSpecName: "config-data") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: W0223 18:51:41.732856 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae86f9fa_10bf_4fbc_b768_0ac7e643483b.slice/crio-56b744efe1309070573c0e05321f3ec4d9e6980a82c031d1e7e8bbc2f76ba98c WatchSource:0}: Error finding container 56b744efe1309070573c0e05321f3ec4d9e6980a82c031d1e7e8bbc2f76ba98c: Status 404 returned error can't find the container with id 56b744efe1309070573c0e05321f3ec4d9e6980a82c031d1e7e8bbc2f76ba98c Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.736805 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.738880 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a00c3dcd-826d-486b-9879-6e45d61a9907" (UID: "a00c3dcd-826d-486b-9879-6e45d61a9907"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745240 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745309 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745320 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745331 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745340 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b88fx\" (UniqueName: \"kubernetes.io/projected/a00c3dcd-826d-486b-9879-6e45d61a9907-kube-api-access-b88fx\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745348 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745358 4768 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745366 
4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745388 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745397 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00c3dcd-826d-486b-9879-6e45d61a9907-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745407 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00c3dcd-826d-486b-9879-6e45d61a9907-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.745416 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.767842 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ef28ba99-309b-4f67-bf0a-e9e22e3808db" (UID: "ef28ba99-309b-4f67-bf0a-e9e22e3808db"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.768172 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.847267 4768 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ef28ba99-309b-4f67-bf0a-e9e22e3808db-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.847300 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerStarted","Data":"15104612d2bffc617491c8747812f4c132b0c366be211ae36fe467064946e30f"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853179 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-central-agent" containerID="cri-o://a9bbb315128aaf6897f425eed34a0d220e409a67ad3ce72ccb43f1d6232abc37" gracePeriod=30 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853308 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-notification-agent" containerID="cri-o://19cd0ed487fa211e10e30af708999bf4ec2efdd21599154f885f6c4ba63de81c" gracePeriod=30 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853310 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="sg-core" containerID="cri-o://b35bb1b7f7170ab04d4178551b510bbd7ade3f3e63d9a6714f4c1df1574985e3" gracePeriod=30 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853285 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="proxy-httpd" containerID="cri-o://15104612d2bffc617491c8747812f4c132b0c366be211ae36fe467064946e30f" gracePeriod=30 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.853423 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.856221 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae86f9fa-10bf-4fbc-b768-0ac7e643483b","Type":"ContainerStarted","Data":"56b744efe1309070573c0e05321f3ec4d9e6980a82c031d1e7e8bbc2f76ba98c"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.877125 4768 generic.go:334] "Generic (PLEG): container finished" podID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerID="5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62" exitCode=0 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.877241 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerDied","Data":"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.877296 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a00c3dcd-826d-486b-9879-6e45d61a9907","Type":"ContainerDied","Data":"53704387cafa39f95630254fa6bae4f5d153807ab77741f6ac413545908d0ce2"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.877317 4768 scope.go:117] "RemoveContainer" 
containerID="5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.877489 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.886482 4768 generic.go:334] "Generic (PLEG): container finished" podID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerID="7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3" exitCode=0 Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.886600 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerDied","Data":"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.886636 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66d66bdc85-82928" event={"ID":"ef28ba99-309b-4f67-bf0a-e9e22e3808db","Type":"ContainerDied","Data":"90675218dc0f7add28852c31d0aef6e9a606107d12dc1878517659291260f06c"} Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.886738 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66d66bdc85-82928" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.897483 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.554656279 podStartE2EDuration="6.897464033s" podCreationTimestamp="2026-02-23 18:51:35 +0000 UTC" firstStartedPulling="2026-02-23 18:51:36.88305291 +0000 UTC m=+1092.273538710" lastFinishedPulling="2026-02-23 18:51:41.225860654 +0000 UTC m=+1096.616346464" observedRunningTime="2026-02-23 18:51:41.877863906 +0000 UTC m=+1097.268349706" watchObservedRunningTime="2026-02-23 18:51:41.897464033 +0000 UTC m=+1097.287949833" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.957384 4768 scope.go:117] "RemoveContainer" containerID="0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb" Feb 23 18:51:41 crc kubenswrapper[4768]: I0223 18:51:41.973513 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.002738 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.019061 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.039170 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66d66bdc85-82928"] Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.048565 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.049079 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-log" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049103 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-log" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.049121 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049129 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.049150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-api" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049157 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-api" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.049173 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049183 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049394 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049418 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" containerName="neutron-api" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049426 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-httpd" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.049439 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" containerName="glance-log" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.050570 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.053771 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.054363 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.055681 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.101214 4768 scope.go:117] "RemoveContainer" containerID="5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.106835 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62\": container with ID starting with 5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62 not found: ID does not exist" containerID="5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.106892 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62"} err="failed to get container status \"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62\": rpc error: code = NotFound desc = could not find container \"5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62\": container with ID starting with 5404e02a66012b06ee00958e7e45bd26d45742029ca64e48b02155807c02ad62 not found: 
ID does not exist" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.106923 4768 scope.go:117] "RemoveContainer" containerID="0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.107241 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb\": container with ID starting with 0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb not found: ID does not exist" containerID="0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.107289 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb"} err="failed to get container status \"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb\": rpc error: code = NotFound desc = could not find container \"0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb\": container with ID starting with 0b9a843ce48640fccbf8a6635c196c4a04c6f208b993f259909773d8e86e12eb not found: ID does not exist" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.107313 4768 scope.go:117] "RemoveContainer" containerID="4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.134601 4768 scope.go:117] "RemoveContainer" containerID="7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156135 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156207 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156386 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrr4r\" (UniqueName: \"kubernetes.io/projected/95f60b43-7764-4d1c-bf7f-150e7fceef75-kube-api-access-zrr4r\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156454 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156644 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.156670 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-logs\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.157582 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.166274 4768 scope.go:117] "RemoveContainer" containerID="4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.167285 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf\": container with ID starting with 4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf not found: ID does not exist" containerID="4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.167325 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf"} err="failed to get container status \"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf\": rpc error: code = NotFound desc = could not find container \"4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf\": 
container with ID starting with 4f6acda68047a479fd590a9e5cd03989179226b33f7b202e33f8e5fae69fcfaf not found: ID does not exist" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.167354 4768 scope.go:117] "RemoveContainer" containerID="7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3" Feb 23 18:51:42 crc kubenswrapper[4768]: E0223 18:51:42.168068 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3\": container with ID starting with 7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3 not found: ID does not exist" containerID="7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.168093 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3"} err="failed to get container status \"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3\": rpc error: code = NotFound desc = could not find container \"7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3\": container with ID starting with 7d033198bb76fc4edd1bfd0d2bffdf9886d003f17371d30cbaffd3efdd8b90a3 not found: ID does not exist" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259598 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259664 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259708 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrr4r\" (UniqueName: \"kubernetes.io/projected/95f60b43-7764-4d1c-bf7f-150e7fceef75-kube-api-access-zrr4r\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259731 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259755 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259800 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259819 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.259851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.260435 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.260566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.260858 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95f60b43-7764-4d1c-bf7f-150e7fceef75-logs\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.264575 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " 
pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.266733 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-scripts\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.268663 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-config-data\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.274576 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/95f60b43-7764-4d1c-bf7f-150e7fceef75-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.277338 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrr4r\" (UniqueName: \"kubernetes.io/projected/95f60b43-7764-4d1c-bf7f-150e7fceef75-kube-api-access-zrr4r\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.290003 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"95f60b43-7764-4d1c-bf7f-150e7fceef75\") " pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.410117 4768 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915307 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerID="15104612d2bffc617491c8747812f4c132b0c366be211ae36fe467064946e30f" exitCode=0 Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915357 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerID="b35bb1b7f7170ab04d4178551b510bbd7ade3f3e63d9a6714f4c1df1574985e3" exitCode=2 Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915369 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerID="19cd0ed487fa211e10e30af708999bf4ec2efdd21599154f885f6c4ba63de81c" exitCode=0 Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915381 4768 generic.go:334] "Generic (PLEG): container finished" podID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerID="a9bbb315128aaf6897f425eed34a0d220e409a67ad3ce72ccb43f1d6232abc37" exitCode=0 Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerDied","Data":"15104612d2bffc617491c8747812f4c132b0c366be211ae36fe467064946e30f"} Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerDied","Data":"b35bb1b7f7170ab04d4178551b510bbd7ade3f3e63d9a6714f4c1df1574985e3"} Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915473 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerDied","Data":"19cd0ed487fa211e10e30af708999bf4ec2efdd21599154f885f6c4ba63de81c"} Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.915488 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerDied","Data":"a9bbb315128aaf6897f425eed34a0d220e409a67ad3ce72ccb43f1d6232abc37"} Feb 23 18:51:42 crc kubenswrapper[4768]: I0223 18:51:42.919820 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae86f9fa-10bf-4fbc-b768-0ac7e643483b","Type":"ContainerStarted","Data":"1cee518a5652ee7d69ed4060b8f35a2d36bac537f51c042754f580050c00dbb1"} Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.080703 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.115324 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.117927 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-66dcf5bf6c-4q2hn" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.219238 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.283403 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.283587 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.283808 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.283886 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whmbg\" (UniqueName: \"kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.283916 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.284021 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.284041 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd\") pod \"b4adfc2e-38dd-497c-9a24-632365dc96ea\" (UID: \"b4adfc2e-38dd-497c-9a24-632365dc96ea\") " Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.284931 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.285443 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.285953 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.289475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg" (OuterVolumeSpecName: "kube-api-access-whmbg") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). 
InnerVolumeSpecName "kube-api-access-whmbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.292351 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts" (OuterVolumeSpecName: "scripts") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.314485 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.325645 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a00c3dcd-826d-486b-9879-6e45d61a9907" path="/var/lib/kubelet/pods/a00c3dcd-826d-486b-9879-6e45d61a9907/volumes" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.326561 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef28ba99-309b-4f67-bf0a-e9e22e3808db" path="/var/lib/kubelet/pods/ef28ba99-309b-4f67-bf0a-e9e22e3808db/volumes" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.369616 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.387347 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.387383 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.387395 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whmbg\" (UniqueName: \"kubernetes.io/projected/b4adfc2e-38dd-497c-9a24-632365dc96ea-kube-api-access-whmbg\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.387404 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.387412 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b4adfc2e-38dd-497c-9a24-632365dc96ea-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.404739 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data" (OuterVolumeSpecName: "config-data") pod "b4adfc2e-38dd-497c-9a24-632365dc96ea" (UID: "b4adfc2e-38dd-497c-9a24-632365dc96ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.489646 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4adfc2e-38dd-497c-9a24-632365dc96ea-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.875212 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7zx9"] Feb 23 18:51:43 crc kubenswrapper[4768]: E0223 18:51:43.875896 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-notification-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.875917 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-notification-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: E0223 18:51:43.875939 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-central-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.875946 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-central-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: E0223 18:51:43.875957 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="proxy-httpd" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.875963 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="proxy-httpd" Feb 23 18:51:43 crc kubenswrapper[4768]: E0223 18:51:43.875983 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="sg-core" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.875989 4768 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="sg-core" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.876142 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-central-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.876164 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="proxy-httpd" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.876174 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="sg-core" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.876183 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" containerName="ceilometer-notification-agent" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.876757 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.879215 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.879345 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.879228 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r828c" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.887565 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7zx9"] Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.945206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ae86f9fa-10bf-4fbc-b768-0ac7e643483b","Type":"ContainerStarted","Data":"9901d9a1ba50a34adfa2ad149c37b707986d3b67b7cae0f7b57ecb68a413d49f"} Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.948058 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95f60b43-7764-4d1c-bf7f-150e7fceef75","Type":"ContainerStarted","Data":"36a7c0a11fff7c782488401004d507e976cf12772c31f2513260c55e73f5a00f"} Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.951395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b4adfc2e-38dd-497c-9a24-632365dc96ea","Type":"ContainerDied","Data":"87abd56f5f576f22c025e3e045d9c8cf77b1f669e31600c43606fab68b07caf0"} Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.951452 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.951461 4768 scope.go:117] "RemoveContainer" containerID="15104612d2bffc617491c8747812f4c132b0c366be211ae36fe467064946e30f" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.975817 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.9757702889999997 podStartE2EDuration="3.975770289s" podCreationTimestamp="2026-02-23 18:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:43.966528166 +0000 UTC m=+1099.357013966" watchObservedRunningTime="2026-02-23 18:51:43.975770289 +0000 UTC m=+1099.366256089" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.998294 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.998594 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.999087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gf2\" (UniqueName: \"kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: 
\"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:43 crc kubenswrapper[4768]: I0223 18:51:43.999392 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.018821 4768 scope.go:117] "RemoveContainer" containerID="b35bb1b7f7170ab04d4178551b510bbd7ade3f3e63d9a6714f4c1df1574985e3" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.036717 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.046415 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.054096 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.056827 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.062563 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.065382 4768 scope.go:117] "RemoveContainer" containerID="19cd0ed487fa211e10e30af708999bf4ec2efdd21599154f885f6c4ba63de81c" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.065470 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.099723 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.100806 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4gf2\" (UniqueName: \"kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.100876 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.100928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.100964 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101018 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101048 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101087 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101170 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101192 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcv69\" (UniqueName: \"kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.101219 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.111564 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.120255 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4gf2\" (UniqueName: \"kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.122167 4768 scope.go:117] "RemoveContainer" containerID="a9bbb315128aaf6897f425eed34a0d220e409a67ad3ce72ccb43f1d6232abc37" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.123869 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.124690 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data\") pod \"nova-cell0-conductor-db-sync-x7zx9\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.192702 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203087 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203228 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203296 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: 
I0223 18:51:44.203331 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcv69\" (UniqueName: \"kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203415 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203444 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.203484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.204555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.205434 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd\") pod \"ceilometer-0\" (UID: 
\"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.207161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.211069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.211841 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.212922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.220040 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcv69\" (UniqueName: \"kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69\") pod \"ceilometer-0\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.387770 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.694516 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7zx9"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.940945 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.966205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95f60b43-7764-4d1c-bf7f-150e7fceef75","Type":"ContainerStarted","Data":"84c77fdc1a3463bfc40e453c3ee5586e6587187edff585e6e06ad30958df8456"} Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.966281 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"95f60b43-7764-4d1c-bf7f-150e7fceef75","Type":"ContainerStarted","Data":"1bfc10c3fb9afbdf4218a429b3d5c262ab0653874a1ea30d9bdeff508039a04e"} Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.970233 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" event={"ID":"3214f46e-82ed-43c6-90ab-e3c001ddb38c","Type":"ContainerStarted","Data":"894ea15a1af104f9bd477e0ce151541ea22c23129898cf805d51d129fffdf3c7"} Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.971366 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerStarted","Data":"142bfb18a59324755dd4ef8f6d3573838b3213bacf4f0f12858e8ef7dd5746c6"} Feb 23 18:51:44 crc kubenswrapper[4768]: I0223 18:51:44.989761 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.9897471810000003 podStartE2EDuration="3.989747181s" podCreationTimestamp="2026-02-23 18:51:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:51:44.986742308 +0000 UTC m=+1100.377228108" watchObservedRunningTime="2026-02-23 18:51:44.989747181 +0000 UTC m=+1100.380232981" Feb 23 18:51:45 crc kubenswrapper[4768]: I0223 18:51:45.338124 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4adfc2e-38dd-497c-9a24-632365dc96ea" path="/var/lib/kubelet/pods/b4adfc2e-38dd-497c-9a24-632365dc96ea/volumes" Feb 23 18:51:45 crc kubenswrapper[4768]: I0223 18:51:45.998014 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerStarted","Data":"4e581377185f7fe6621082436542c8a2b2f88f735ef046b04199d1d57500e16b"} Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.018292 4768 generic.go:334] "Generic (PLEG): container finished" podID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerID="5beaf90673a241480f2721b2cb11d0bf9f251a26131590b7450193ab00ec0e69" exitCode=137 Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.019823 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerDied","Data":"5beaf90673a241480f2721b2cb11d0bf9f251a26131590b7450193ab00ec0e69"} Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.286107 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.464978 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465015 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465039 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465118 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtxb\" (UniqueName: \"kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465173 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465199 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.465275 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key\") pod \"c46ebaa2-3910-4025-8420-71eb83b3a909\" (UID: \"c46ebaa2-3910-4025-8420-71eb83b3a909\") " Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.467847 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs" (OuterVolumeSpecName: "logs") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.472445 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.487504 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb" (OuterVolumeSpecName: "kube-api-access-7qtxb") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "kube-api-access-7qtxb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.499510 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.499966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data" (OuterVolumeSpecName: "config-data") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.527292 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.533624 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts" (OuterVolumeSpecName: "scripts") pod "c46ebaa2-3910-4025-8420-71eb83b3a909" (UID: "c46ebaa2-3910-4025-8420-71eb83b3a909"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569106 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569581 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c46ebaa2-3910-4025-8420-71eb83b3a909-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569596 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c46ebaa2-3910-4025-8420-71eb83b3a909-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569672 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtxb\" (UniqueName: \"kubernetes.io/projected/c46ebaa2-3910-4025-8420-71eb83b3a909-kube-api-access-7qtxb\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569686 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569698 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.569708 4768 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c46ebaa2-3910-4025-8420-71eb83b3a909-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 23 18:51:46 crc kubenswrapper[4768]: I0223 18:51:46.625404 4768 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.033103 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerStarted","Data":"b3c37db8e7b8b82ff3680d2c3dd113fbc2e2c4889a52ecdb49395f2756b585f9"} Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.035345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-84699c9d66-ghjfn" event={"ID":"c46ebaa2-3910-4025-8420-71eb83b3a909","Type":"ContainerDied","Data":"3cb7e26cc90bc3c2d0930b14e25e4148ddb682d3c42cd6d599aeb15673afbc18"} Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.035413 4768 scope.go:117] "RemoveContainer" containerID="a8b896bc35a90342c52e7fd2aa30b84aefe074f3b241b438ecfa2e1f371e5920" Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.035411 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-84699c9d66-ghjfn" Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.085616 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"] Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.093262 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-84699c9d66-ghjfn"] Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.250437 4768 scope.go:117] "RemoveContainer" containerID="5beaf90673a241480f2721b2cb11d0bf9f251a26131590b7450193ab00ec0e69" Feb 23 18:51:47 crc kubenswrapper[4768]: I0223 18:51:47.319216 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" path="/var/lib/kubelet/pods/c46ebaa2-3910-4025-8420-71eb83b3a909/volumes" Feb 23 18:51:48 crc kubenswrapper[4768]: I0223 18:51:48.064110 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerStarted","Data":"147974c5d553fef42a359d095b42274ad45c5f007024f29dac40be36dcebe088"} Feb 23 18:51:51 crc kubenswrapper[4768]: I0223 18:51:51.068071 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 18:51:51 crc kubenswrapper[4768]: I0223 18:51:51.068153 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 23 18:51:51 crc kubenswrapper[4768]: I0223 18:51:51.111478 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 18:51:51 crc kubenswrapper[4768]: I0223 18:51:51.111955 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 18:51:51 crc kubenswrapper[4768]: I0223 18:51:51.113306 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 23 18:51:52 crc kubenswrapper[4768]: I0223 18:51:52.110043 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 23 18:51:52 crc kubenswrapper[4768]: I0223 18:51:52.410577 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:52 crc kubenswrapper[4768]: I0223 18:51:52.410623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:52 crc kubenswrapper[4768]: I0223 18:51:52.453757 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:52 crc kubenswrapper[4768]: I0223 18:51:52.466567 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:53 crc 
kubenswrapper[4768]: I0223 18:51:53.119336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" event={"ID":"3214f46e-82ed-43c6-90ab-e3c001ddb38c","Type":"ContainerStarted","Data":"d4da2e2667ee9b9c1780416afcb349b28083401cf2933a91cf4459b7fea12e5f"} Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.123104 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.123970 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-central-agent" containerID="cri-o://4e581377185f7fe6621082436542c8a2b2f88f735ef046b04199d1d57500e16b" gracePeriod=30 Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124131 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="sg-core" containerID="cri-o://147974c5d553fef42a359d095b42274ad45c5f007024f29dac40be36dcebe088" gracePeriod=30 Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124207 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="proxy-httpd" containerID="cri-o://56ffb163f59ebb5bf713213637c810e8ad4e331fabc341f4f799af56391cf1e5" gracePeriod=30 Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124270 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-notification-agent" containerID="cri-o://b3c37db8e7b8b82ff3680d2c3dd113fbc2e2c4889a52ecdb49395f2756b585f9" gracePeriod=30 Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124475 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerStarted","Data":"56ffb163f59ebb5bf713213637c810e8ad4e331fabc341f4f799af56391cf1e5"} Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124537 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.124553 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.142577 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" podStartSLOduration=2.080720938 podStartE2EDuration="10.142556182s" podCreationTimestamp="2026-02-23 18:51:43 +0000 UTC" firstStartedPulling="2026-02-23 18:51:44.703066913 +0000 UTC m=+1100.093552713" lastFinishedPulling="2026-02-23 18:51:52.764902157 +0000 UTC m=+1108.155387957" observedRunningTime="2026-02-23 18:51:53.13953884 +0000 UTC m=+1108.530024640" watchObservedRunningTime="2026-02-23 18:51:53.142556182 +0000 UTC m=+1108.533041982" Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.177269 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.36786419 podStartE2EDuration="9.177236181s" podCreationTimestamp="2026-02-23 18:51:44 +0000 UTC" firstStartedPulling="2026-02-23 18:51:44.94655728 +0000 UTC m=+1100.337043090" lastFinishedPulling="2026-02-23 18:51:52.755929281 +0000 UTC m=+1108.146415081" observedRunningTime="2026-02-23 18:51:53.170522467 +0000 UTC m=+1108.561008267" watchObservedRunningTime="2026-02-23 18:51:53.177236181 +0000 UTC m=+1108.567721971" Feb 23 18:51:53 crc kubenswrapper[4768]: I0223 18:51:53.299560 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.136702 4768 
generic.go:334] "Generic (PLEG): container finished" podID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerID="147974c5d553fef42a359d095b42274ad45c5f007024f29dac40be36dcebe088" exitCode=2 Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.137133 4768 generic.go:334] "Generic (PLEG): container finished" podID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerID="b3c37db8e7b8b82ff3680d2c3dd113fbc2e2c4889a52ecdb49395f2756b585f9" exitCode=0 Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.137151 4768 generic.go:334] "Generic (PLEG): container finished" podID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerID="4e581377185f7fe6621082436542c8a2b2f88f735ef046b04199d1d57500e16b" exitCode=0 Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.138546 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerDied","Data":"147974c5d553fef42a359d095b42274ad45c5f007024f29dac40be36dcebe088"} Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.138587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerDied","Data":"b3c37db8e7b8b82ff3680d2c3dd113fbc2e2c4889a52ecdb49395f2756b585f9"} Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.138612 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerDied","Data":"4e581377185f7fe6621082436542c8a2b2f88f735ef046b04199d1d57500e16b"} Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.138675 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:51:54 crc kubenswrapper[4768]: I0223 18:51:54.197812 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 23 18:51:55 crc kubenswrapper[4768]: I0223 18:51:55.107996 4768 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 18:51:55 crc kubenswrapper[4768]: I0223 18:51:55.146029 4768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 23 18:51:55 crc kubenswrapper[4768]: I0223 18:51:55.276916 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 23 18:52:05 crc kubenswrapper[4768]: I0223 18:52:05.263100 4768 generic.go:334] "Generic (PLEG): container finished" podID="3214f46e-82ed-43c6-90ab-e3c001ddb38c" containerID="d4da2e2667ee9b9c1780416afcb349b28083401cf2933a91cf4459b7fea12e5f" exitCode=0 Feb 23 18:52:05 crc kubenswrapper[4768]: I0223 18:52:05.263332 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" event={"ID":"3214f46e-82ed-43c6-90ab-e3c001ddb38c","Type":"ContainerDied","Data":"d4da2e2667ee9b9c1780416afcb349b28083401cf2933a91cf4459b7fea12e5f"} Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.683651 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.786588 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts\") pod \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.786848 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4gf2\" (UniqueName: \"kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2\") pod \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.787014 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data\") pod \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.787087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle\") pod \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\" (UID: \"3214f46e-82ed-43c6-90ab-e3c001ddb38c\") " Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.793678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts" (OuterVolumeSpecName: "scripts") pod "3214f46e-82ed-43c6-90ab-e3c001ddb38c" (UID: "3214f46e-82ed-43c6-90ab-e3c001ddb38c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.794223 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2" (OuterVolumeSpecName: "kube-api-access-c4gf2") pod "3214f46e-82ed-43c6-90ab-e3c001ddb38c" (UID: "3214f46e-82ed-43c6-90ab-e3c001ddb38c"). InnerVolumeSpecName "kube-api-access-c4gf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.814307 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3214f46e-82ed-43c6-90ab-e3c001ddb38c" (UID: "3214f46e-82ed-43c6-90ab-e3c001ddb38c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.835475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data" (OuterVolumeSpecName: "config-data") pod "3214f46e-82ed-43c6-90ab-e3c001ddb38c" (UID: "3214f46e-82ed-43c6-90ab-e3c001ddb38c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.889939 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4gf2\" (UniqueName: \"kubernetes.io/projected/3214f46e-82ed-43c6-90ab-e3c001ddb38c-kube-api-access-c4gf2\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.889982 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.889998 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:06 crc kubenswrapper[4768]: I0223 18:52:06.890013 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3214f46e-82ed-43c6-90ab-e3c001ddb38c-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.289552 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" event={"ID":"3214f46e-82ed-43c6-90ab-e3c001ddb38c","Type":"ContainerDied","Data":"894ea15a1af104f9bd477e0ce151541ea22c23129898cf805d51d129fffdf3c7"} Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.289601 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="894ea15a1af104f9bd477e0ce151541ea22c23129898cf805d51d129fffdf3c7" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.289622 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-x7zx9" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.444704 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 18:52:07 crc kubenswrapper[4768]: E0223 18:52:07.445369 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon-log" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445394 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon-log" Feb 23 18:52:07 crc kubenswrapper[4768]: E0223 18:52:07.445408 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445418 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" Feb 23 18:52:07 crc kubenswrapper[4768]: E0223 18:52:07.445451 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3214f46e-82ed-43c6-90ab-e3c001ddb38c" containerName="nova-cell0-conductor-db-sync" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445458 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3214f46e-82ed-43c6-90ab-e3c001ddb38c" containerName="nova-cell0-conductor-db-sync" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445656 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3214f46e-82ed-43c6-90ab-e3c001ddb38c" containerName="nova-cell0-conductor-db-sync" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445674 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.445691 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c46ebaa2-3910-4025-8420-71eb83b3a909" containerName="horizon-log" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.446722 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.451930 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.452237 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-r828c" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.464548 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.502203 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.502290 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.502343 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqf6h\" (UniqueName: \"kubernetes.io/projected/22014113-0a8e-4444-b685-5ab40ffc8402-kube-api-access-fqf6h\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: 
I0223 18:52:07.604207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.604577 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.604691 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqf6h\" (UniqueName: \"kubernetes.io/projected/22014113-0a8e-4444-b685-5ab40ffc8402-kube-api-access-fqf6h\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.613242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.613587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/22014113-0a8e-4444-b685-5ab40ffc8402-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.637811 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqf6h\" 
(UniqueName: \"kubernetes.io/projected/22014113-0a8e-4444-b685-5ab40ffc8402-kube-api-access-fqf6h\") pod \"nova-cell0-conductor-0\" (UID: \"22014113-0a8e-4444-b685-5ab40ffc8402\") " pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:07 crc kubenswrapper[4768]: I0223 18:52:07.778656 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:08 crc kubenswrapper[4768]: I0223 18:52:08.347143 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 23 18:52:08 crc kubenswrapper[4768]: W0223 18:52:08.352328 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22014113_0a8e_4444_b685_5ab40ffc8402.slice/crio-e929fb4173aa0178ea1acea263e9d097d3bbbec0f52ede856fad4db2c6969eb9 WatchSource:0}: Error finding container e929fb4173aa0178ea1acea263e9d097d3bbbec0f52ede856fad4db2c6969eb9: Status 404 returned error can't find the container with id e929fb4173aa0178ea1acea263e9d097d3bbbec0f52ede856fad4db2c6969eb9 Feb 23 18:52:09 crc kubenswrapper[4768]: I0223 18:52:09.324746 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:09 crc kubenswrapper[4768]: I0223 18:52:09.325283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22014113-0a8e-4444-b685-5ab40ffc8402","Type":"ContainerStarted","Data":"031fb3a2b1ad0407ce1f99ecd423e2cd3f393612281583295dff96bb6643430f"} Feb 23 18:52:09 crc kubenswrapper[4768]: I0223 18:52:09.325318 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"22014113-0a8e-4444-b685-5ab40ffc8402","Type":"ContainerStarted","Data":"e929fb4173aa0178ea1acea263e9d097d3bbbec0f52ede856fad4db2c6969eb9"} Feb 23 18:52:09 crc kubenswrapper[4768]: I0223 18:52:09.350385 4768 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.350346711 podStartE2EDuration="2.350346711s" podCreationTimestamp="2026-02-23 18:52:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:09.335112094 +0000 UTC m=+1124.725597934" watchObservedRunningTime="2026-02-23 18:52:09.350346711 +0000 UTC m=+1124.740832541" Feb 23 18:52:14 crc kubenswrapper[4768]: I0223 18:52:14.388336 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 18:52:14 crc kubenswrapper[4768]: I0223 18:52:14.394326 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 23 18:52:17 crc kubenswrapper[4768]: I0223 18:52:17.819293 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.369466 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-gxk2h"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.371271 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.374956 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.375053 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.398286 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gxk2h"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.478411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.479067 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxd9\" (UniqueName: \"kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.479175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.479634 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.582134 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxd9\" (UniqueName: \"kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.582284 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.582321 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.583170 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.589492 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.595306 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.596870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.633149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxd9\" (UniqueName: \"kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9\") pod \"nova-cell0-cell-mapping-gxk2h\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") " pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.646336 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.648543 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.655042 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.661820 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.664010 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.666089 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.679159 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.701390 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.701934 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gxk2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789267 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789365 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789401 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789428 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789471 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqlxr\" (UniqueName: \"kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789516 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.789545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnjv\" (UniqueName: \"kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.807313 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.808574 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.817743 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.826508 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.827914 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.855364 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.883057 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.892916 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.892976 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893029 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893065 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893117 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893139 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9qks\" (UniqueName: \"kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893188 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqlxr\" (UniqueName: \"kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893239 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893302 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lnjv\" (UniqueName: \"kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893361 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " 
pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.893387 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.899188 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.900909 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.906273 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.906904 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.907493 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.907535 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.908814 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.912536 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.912610 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.921679 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqlxr\" (UniqueName: \"kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr\") pod \"nova-metadata-0\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " pod="openstack/nova-metadata-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.956871 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lnjv\" (UniqueName: \"kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv\") pod \"nova-api-0\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " pod="openstack/nova-api-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.964808 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.995111 
4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.996529 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.996672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.996758 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.996865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.996953 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w9qks\" (UniqueName: \"kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997073 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997235 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997348 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd6qn\" (UniqueName: \"kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997443 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcw52\" (UniqueName: \"kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997546 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:18 crc kubenswrapper[4768]: I0223 18:52:18.997622 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.002981 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.010978 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.013361 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9qks\" (UniqueName: \"kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks\") pod \"nova-scheduler-0\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.017759 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.037232 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109484 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109556 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109591 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109609 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109646 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config\") pod 
\"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109721 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109760 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd6qn\" (UniqueName: \"kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109877 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcw52\" (UniqueName: \"kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.109918 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.111036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.111622 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.112269 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.112565 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.114899 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.122298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 
crc kubenswrapper[4768]: I0223 18:52:19.129541 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.133149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd6qn\" (UniqueName: \"kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn\") pod \"dnsmasq-dns-757b4f8459-ckv2h\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.137284 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcw52\" (UniqueName: \"kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52\") pod \"nova-cell1-novncproxy-0\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.190301 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.267067 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.277883 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.479372 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.537968 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-gxk2h"] Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.744402 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6mfc"] Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.746224 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.749580 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.752016 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6mfc"] Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.756926 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.827726 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tvrb\" (UniqueName: \"kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.827806 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.827850 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.827892 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.834850 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.929565 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.929638 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.929788 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6tvrb\" (UniqueName: \"kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.929851 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.941276 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.952398 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.965173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tvrb\" (UniqueName: \"kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:19 crc kubenswrapper[4768]: I0223 18:52:19.969278 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data\") pod \"nova-cell1-conductor-db-sync-f6mfc\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") " pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.036949 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.046240 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.177129 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.207633 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.509204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerStarted","Data":"9ba1b0fa1a9d8322d7cf50b4232857e7f37cea35014e80e8f098c5b34196108b"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.511535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gxk2h" event={"ID":"a444bad7-3d6c-4bf7-9426-db8a387f87ac","Type":"ContainerStarted","Data":"70d1adc8b624176eecb1a26f13dc7bfc98c95e720ce4ed51a82dcdbd9a259c9b"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.511572 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gxk2h" event={"ID":"a444bad7-3d6c-4bf7-9426-db8a387f87ac","Type":"ContainerStarted","Data":"5f75e2ae9aecfe39f89cf5f6e39b789a10716024e4bd5f2ace45da37cf681b40"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.519728 4768 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2d46346-7b49-48a8-995f-a9e01ac5185b","Type":"ContainerStarted","Data":"8b1cd89a17ac02d821171c817481f72599b8a45ba7d493cf2662e6aa5fa9ab6a"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.522997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerStarted","Data":"a5ee5fe1c042f8d0e3b5bc4eb2c0e1e9950936814fc9311dca130e53a33e8c6b"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.524515 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51a8265d-41d6-4bdf-a3e8-cb8ade072b45","Type":"ContainerStarted","Data":"83b33e4798afc4dac9e08dcd5df9caa5d1451bdf7daf635fcf669ee790298834"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.527002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerStarted","Data":"049c55e34f4202e21adfaa9c3283e810621bca484b8d06089d147226159139f5"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.527033 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerStarted","Data":"1053353b5e81cd658c3996d2997449f26761a182b28ea790216bc54d1f544784"} Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.534159 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-gxk2h" podStartSLOduration=2.5341227120000003 podStartE2EDuration="2.534122712s" podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:20.532427896 +0000 UTC m=+1135.922913716" watchObservedRunningTime="2026-02-23 
18:52:20.534122712 +0000 UTC m=+1135.924608532" Feb 23 18:52:20 crc kubenswrapper[4768]: I0223 18:52:20.677564 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6mfc"] Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.544213 4768 generic.go:334] "Generic (PLEG): container finished" podID="95585638-93ab-482b-8618-20d1e1d2b01b" containerID="049c55e34f4202e21adfaa9c3283e810621bca484b8d06089d147226159139f5" exitCode=0 Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.544400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerDied","Data":"049c55e34f4202e21adfaa9c3283e810621bca484b8d06089d147226159139f5"} Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.544647 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerStarted","Data":"b747ef83a56779637975ed5d96d012c32b91d295deda16d4037d359aab91e76a"} Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.544714 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.548422 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" event={"ID":"bddc6e4f-e0b0-4343-85c5-d77aa92d190c","Type":"ContainerStarted","Data":"4fe34a3364c304da503e4c8404e441842558dd8a8622e327b71edbcde95226f0"} Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.548458 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" event={"ID":"bddc6e4f-e0b0-4343-85c5-d77aa92d190c","Type":"ContainerStarted","Data":"a66cb812ac952e40e1a5e42186f781aa13a4c2ca41dd75562f1167cede19ce29"} Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.578973 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" podStartSLOduration=3.578942108 podStartE2EDuration="3.578942108s" podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:21.566036104 +0000 UTC m=+1136.956521934" watchObservedRunningTime="2026-02-23 18:52:21.578942108 +0000 UTC m=+1136.969427908" Feb 23 18:52:21 crc kubenswrapper[4768]: I0223 18:52:21.590856 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" podStartSLOduration=2.590837033 podStartE2EDuration="2.590837033s" podCreationTimestamp="2026-02-23 18:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:21.588925221 +0000 UTC m=+1136.979411021" watchObservedRunningTime="2026-02-23 18:52:21.590837033 +0000 UTC m=+1136.981322843" Feb 23 18:52:22 crc kubenswrapper[4768]: I0223 18:52:22.953648 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:22 crc kubenswrapper[4768]: I0223 18:52:22.965072 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.585908 4768 generic.go:334] "Generic (PLEG): container finished" podID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerID="56ffb163f59ebb5bf713213637c810e8ad4e331fabc341f4f799af56391cf1e5" exitCode=137 Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.585990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerDied","Data":"56ffb163f59ebb5bf713213637c810e8ad4e331fabc341f4f799af56391cf1e5"} Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 
18:52:23.609476 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.744995 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745053 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745121 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745160 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745274 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.745434 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcv69\" (UniqueName: \"kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69\") pod \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\" (UID: \"effaa5c0-5154-4b0b-b231-4fb61bf4d011\") " Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.746968 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.752013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts" (OuterVolumeSpecName: "scripts") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.752379 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.752649 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69" (OuterVolumeSpecName: "kube-api-access-lcv69") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "kube-api-access-lcv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.796774 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.848331 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcv69\" (UniqueName: \"kubernetes.io/projected/effaa5c0-5154-4b0b-b231-4fb61bf4d011-kube-api-access-lcv69\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.848858 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.848882 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.848898 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.848911 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/effaa5c0-5154-4b0b-b231-4fb61bf4d011-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.883717 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.922345 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data" (OuterVolumeSpecName: "config-data") pod "effaa5c0-5154-4b0b-b231-4fb61bf4d011" (UID: "effaa5c0-5154-4b0b-b231-4fb61bf4d011"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.951216 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:23 crc kubenswrapper[4768]: I0223 18:52:23.951302 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/effaa5c0-5154-4b0b-b231-4fb61bf4d011-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.607035 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerStarted","Data":"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.607109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerStarted","Data":"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.607343 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-log" containerID="cri-o://475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" gracePeriod=30 Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.607499 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-metadata" containerID="cri-o://113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" gracePeriod=30 Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.613598 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2d46346-7b49-48a8-995f-a9e01ac5185b","Type":"ContainerStarted","Data":"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.614071 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="c2d46346-7b49-48a8-995f-a9e01ac5185b" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912" gracePeriod=30 Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.620213 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerStarted","Data":"19d6847ac07978f03d5b4a9de21b2e7bd0f2f24b69c43291e4ad86142f01cc25"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.620312 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerStarted","Data":"88cc5b043d52aa4014786482117e2894b96d75265c4c4aa6796778359f163fd1"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.626061 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51a8265d-41d6-4bdf-a3e8-cb8ade072b45","Type":"ContainerStarted","Data":"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.633367 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"effaa5c0-5154-4b0b-b231-4fb61bf4d011","Type":"ContainerDied","Data":"142bfb18a59324755dd4ef8f6d3573838b3213bacf4f0f12858e8ef7dd5746c6"} Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.633433 4768 scope.go:117] "RemoveContainer" containerID="56ffb163f59ebb5bf713213637c810e8ad4e331fabc341f4f799af56391cf1e5" Feb 23 18:52:24 crc 
kubenswrapper[4768]: I0223 18:52:24.633612 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.657849 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.366593903 podStartE2EDuration="6.657817015s" podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="2026-02-23 18:52:19.840770106 +0000 UTC m=+1135.231255906" lastFinishedPulling="2026-02-23 18:52:23.131993218 +0000 UTC m=+1138.522479018" observedRunningTime="2026-02-23 18:52:24.648018608 +0000 UTC m=+1140.038504428" watchObservedRunningTime="2026-02-23 18:52:24.657817015 +0000 UTC m=+1140.048302815" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.688730 4768 scope.go:117] "RemoveContainer" containerID="147974c5d553fef42a359d095b42274ad45c5f007024f29dac40be36dcebe088" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.724681 4768 scope.go:117] "RemoveContainer" containerID="b3c37db8e7b8b82ff3680d2c3dd113fbc2e2c4889a52ecdb49395f2756b585f9" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.724628 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.116578256 podStartE2EDuration="6.724596181s" podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="2026-02-23 18:52:19.533738501 +0000 UTC m=+1134.924224301" lastFinishedPulling="2026-02-23 18:52:23.141756426 +0000 UTC m=+1138.532242226" observedRunningTime="2026-02-23 18:52:24.679631532 +0000 UTC m=+1140.070117342" watchObservedRunningTime="2026-02-23 18:52:24.724596181 +0000 UTC m=+1140.115082001" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.727892 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.644181492 podStartE2EDuration="6.727876341s" 
podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="2026-02-23 18:52:20.04831972 +0000 UTC m=+1135.438805520" lastFinishedPulling="2026-02-23 18:52:23.132014559 +0000 UTC m=+1138.522500369" observedRunningTime="2026-02-23 18:52:24.70261177 +0000 UTC m=+1140.093097570" watchObservedRunningTime="2026-02-23 18:52:24.727876341 +0000 UTC m=+1140.118362151" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.769431 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.684511654 podStartE2EDuration="6.769403376s" podCreationTimestamp="2026-02-23 18:52:18 +0000 UTC" firstStartedPulling="2026-02-23 18:52:20.048530396 +0000 UTC m=+1135.439016186" lastFinishedPulling="2026-02-23 18:52:23.133422118 +0000 UTC m=+1138.523907908" observedRunningTime="2026-02-23 18:52:24.73477039 +0000 UTC m=+1140.125256190" watchObservedRunningTime="2026-02-23 18:52:24.769403376 +0000 UTC m=+1140.159889176" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.817747 4768 scope.go:117] "RemoveContainer" containerID="4e581377185f7fe6621082436542c8a2b2f88f735ef046b04199d1d57500e16b" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.818788 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:52:24 crc kubenswrapper[4768]: E0223 18:52:24.830030 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906a0a58_70bc_494b_b608_0b9d727cc5be.slice/crio-475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e.scope\": RecentStats: unable to find data in memory cache]" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.830407 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.844094 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Feb 23 18:52:24 crc kubenswrapper[4768]: E0223 18:52:24.844886 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-notification-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.844910 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-notification-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: E0223 18:52:24.844934 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-central-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.844940 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-central-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: E0223 18:52:24.844952 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="sg-core" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.844959 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="sg-core" Feb 23 18:52:24 crc kubenswrapper[4768]: E0223 18:52:24.844969 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="proxy-httpd" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.844976 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="proxy-httpd" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.845205 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="proxy-httpd" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.845220 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="sg-core" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.845235 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-central-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.845267 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" containerName="ceilometer-notification-agent" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.847489 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.854847 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.856216 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.864066 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975219 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975564 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975753 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975858 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.975932 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d42dw\" (UniqueName: \"kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:24 crc kubenswrapper[4768]: I0223 18:52:24.976232 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078168 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078291 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078327 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078366 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078391 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078413 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d42dw\" (UniqueName: \"kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: 
I0223 18:52:25.078472 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.078965 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.079482 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.084925 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.085366 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.096147 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " 
pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.099155 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d42dw\" (UniqueName: \"kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.100410 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data\") pod \"ceilometer-0\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.185842 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.344156 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="effaa5c0-5154-4b0b-b231-4fb61bf4d011" path="/var/lib/kubelet/pods/effaa5c0-5154-4b0b-b231-4fb61bf4d011/volumes" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.542224 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.588581 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs\") pod \"906a0a58-70bc-494b-b608-0b9d727cc5be\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.588780 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle\") pod \"906a0a58-70bc-494b-b608-0b9d727cc5be\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.588834 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqlxr\" (UniqueName: \"kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr\") pod \"906a0a58-70bc-494b-b608-0b9d727cc5be\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.588884 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data\") pod \"906a0a58-70bc-494b-b608-0b9d727cc5be\" (UID: \"906a0a58-70bc-494b-b608-0b9d727cc5be\") " Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.590163 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs" (OuterVolumeSpecName: "logs") pod "906a0a58-70bc-494b-b608-0b9d727cc5be" (UID: "906a0a58-70bc-494b-b608-0b9d727cc5be"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.598529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr" (OuterVolumeSpecName: "kube-api-access-kqlxr") pod "906a0a58-70bc-494b-b608-0b9d727cc5be" (UID: "906a0a58-70bc-494b-b608-0b9d727cc5be"). InnerVolumeSpecName "kube-api-access-kqlxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.641375 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data" (OuterVolumeSpecName: "config-data") pod "906a0a58-70bc-494b-b608-0b9d727cc5be" (UID: "906a0a58-70bc-494b-b608-0b9d727cc5be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.653228 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "906a0a58-70bc-494b-b608-0b9d727cc5be" (UID: "906a0a58-70bc-494b-b608-0b9d727cc5be"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.671553 4768 generic.go:334] "Generic (PLEG): container finished" podID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerID="113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" exitCode=0 Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.671593 4768 generic.go:334] "Generic (PLEG): container finished" podID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerID="475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" exitCode=143 Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.672839 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.672985 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerDied","Data":"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161"} Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.673031 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerDied","Data":"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e"} Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.673042 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"906a0a58-70bc-494b-b608-0b9d727cc5be","Type":"ContainerDied","Data":"9ba1b0fa1a9d8322d7cf50b4232857e7f37cea35014e80e8f098c5b34196108b"} Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.673058 4768 scope.go:117] "RemoveContainer" containerID="113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.691603 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.691654 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqlxr\" (UniqueName: \"kubernetes.io/projected/906a0a58-70bc-494b-b608-0b9d727cc5be-kube-api-access-kqlxr\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.691675 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/906a0a58-70bc-494b-b608-0b9d727cc5be-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.691690 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/906a0a58-70bc-494b-b608-0b9d727cc5be-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.733326 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.745325 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.759085 4768 scope.go:117] "RemoveContainer" containerID="475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.769940 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:25 crc kubenswrapper[4768]: E0223 18:52:25.770672 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-log" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.770695 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-log" Feb 23 18:52:25 crc kubenswrapper[4768]: E0223 18:52:25.770737 
4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-metadata" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.770745 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-metadata" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.770974 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-metadata" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.771000 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" containerName="nova-metadata-log" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.772877 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.776480 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.776726 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.801146 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.812359 4768 scope.go:117] "RemoveContainer" containerID="113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.812492 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:52:25 crc kubenswrapper[4768]: E0223 18:52:25.813043 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161\": container with ID starting with 113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161 not found: ID does not exist" containerID="113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.813069 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161"} err="failed to get container status \"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161\": rpc error: code = NotFound desc = could not find container \"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161\": container with ID starting with 113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161 not found: ID does not exist" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.813091 4768 scope.go:117] "RemoveContainer" containerID="475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" Feb 23 18:52:25 crc kubenswrapper[4768]: E0223 18:52:25.813827 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e\": container with ID starting with 475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e not found: ID does not exist" containerID="475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.813858 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e"} err="failed to get container status \"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e\": rpc error: code = NotFound desc = could not find container \"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e\": container with ID 
starting with 475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e not found: ID does not exist" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.813873 4768 scope.go:117] "RemoveContainer" containerID="113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.814295 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161"} err="failed to get container status \"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161\": rpc error: code = NotFound desc = could not find container \"113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161\": container with ID starting with 113cb98c125c78cee61d0c34aa91618ee4f693a1ba6e35d286899fa8e5ffa161 not found: ID does not exist" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.814318 4768 scope.go:117] "RemoveContainer" containerID="475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.814642 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e"} err="failed to get container status \"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e\": rpc error: code = NotFound desc = could not find container \"475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e\": container with ID starting with 475da31812388b99003a48d91fa1a5e94cd859b4b3b73842c1c6521983f9b84e not found: ID does not exist" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.897672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.898165 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.898220 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwh6\" (UniqueName: \"kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.898487 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:25 crc kubenswrapper[4768]: I0223 18:52:25.898849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.000133 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 
18:52:26.000197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.000235 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.000328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwh6\" (UniqueName: \"kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.000446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.001378 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.005569 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.005976 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.006341 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.017474 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkwh6\" (UniqueName: \"kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6\") pod \"nova-metadata-0\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") " pod="openstack/nova-metadata-0" Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.125846 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.650759 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.687089 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerStarted","Data":"3fccbea9aadaab22b1d6a21c7b4b06ccbf41f79797b02670071a5ed92aa867ee"}
Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.690516 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerStarted","Data":"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb"}
Feb 23 18:52:26 crc kubenswrapper[4768]: I0223 18:52:26.690550 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerStarted","Data":"9d5de070ce2fa8c399658c1ee7b346dd1231c5262d42923099037b7dcd27beba"}
Feb 23 18:52:27 crc kubenswrapper[4768]: I0223 18:52:27.336106 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906a0a58-70bc-494b-b608-0b9d727cc5be" path="/var/lib/kubelet/pods/906a0a58-70bc-494b-b608-0b9d727cc5be/volumes"
Feb 23 18:52:27 crc kubenswrapper[4768]: I0223 18:52:27.704062 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerStarted","Data":"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f"}
Feb 23 18:52:27 crc kubenswrapper[4768]: I0223 18:52:27.704117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerStarted","Data":"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b"}
Feb 23 18:52:27 crc kubenswrapper[4768]: I0223 18:52:27.707178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerStarted","Data":"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec"}
Feb 23 18:52:27 crc kubenswrapper[4768]: I0223 18:52:27.732833 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.732815808 podStartE2EDuration="2.732815808s" podCreationTimestamp="2026-02-23 18:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:27.723994507 +0000 UTC m=+1143.114480307" watchObservedRunningTime="2026-02-23 18:52:27.732815808 +0000 UTC m=+1143.123301608"
Feb 23 18:52:28 crc kubenswrapper[4768]: I0223 18:52:28.721386 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerStarted","Data":"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e"}
Feb 23 18:52:28 crc kubenswrapper[4768]: I0223 18:52:28.724989 4768 generic.go:334] "Generic (PLEG): container finished" podID="bddc6e4f-e0b0-4343-85c5-d77aa92d190c" containerID="4fe34a3364c304da503e4c8404e441842558dd8a8622e327b71edbcde95226f0" exitCode=0
Feb 23 18:52:28 crc kubenswrapper[4768]: I0223 18:52:28.725074 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" event={"ID":"bddc6e4f-e0b0-4343-85c5-d77aa92d190c","Type":"ContainerDied","Data":"4fe34a3364c304da503e4c8404e441842558dd8a8622e327b71edbcde95226f0"}
Feb 23 18:52:28 crc kubenswrapper[4768]: I0223 18:52:28.727992 4768 generic.go:334] "Generic (PLEG): container finished" podID="a444bad7-3d6c-4bf7-9426-db8a387f87ac" containerID="70d1adc8b624176eecb1a26f13dc7bfc98c95e720ce4ed51a82dcdbd9a259c9b" exitCode=0
Feb 23 18:52:28 crc kubenswrapper[4768]: I0223 18:52:28.728067 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gxk2h" event={"ID":"a444bad7-3d6c-4bf7-9426-db8a387f87ac","Type":"ContainerDied","Data":"70d1adc8b624176eecb1a26f13dc7bfc98c95e720ce4ed51a82dcdbd9a259c9b"}
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.018946 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.019080 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.190650 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.191132 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.229932 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.269541 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.278809 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.362317 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"]
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.362664 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="dnsmasq-dns" containerID="cri-o://9b702c8a2f49f152355850a0d01baa5a20f0166d677b2173f439d35a0566a116" gracePeriod=10
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.752776 4768 generic.go:334] "Generic (PLEG): container finished" podID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerID="9b702c8a2f49f152355850a0d01baa5a20f0166d677b2173f439d35a0566a116" exitCode=0
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.754378 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" event={"ID":"6e097cb1-8802-4e80-b5c9-6469c7387e0b","Type":"ContainerDied","Data":"9b702c8a2f49f152355850a0d01baa5a20f0166d677b2173f439d35a0566a116"}
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.796844 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 23 18:52:29 crc kubenswrapper[4768]: I0223 18:52:29.977708 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.102505 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.102984 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123186 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123290 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123387 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwfmn\" (UniqueName: \"kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123781 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.123845 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config\") pod \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\" (UID: \"6e097cb1-8802-4e80-b5c9-6469c7387e0b\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.135431 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn" (OuterVolumeSpecName: "kube-api-access-nwfmn") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "kube-api-access-nwfmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.220566 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config" (OuterVolumeSpecName: "config") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.226704 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwfmn\" (UniqueName: \"kubernetes.io/projected/6e097cb1-8802-4e80-b5c9-6469c7387e0b-kube-api-access-nwfmn\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.226733 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-config\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.233012 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.254412 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.261390 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.263474 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6mfc"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.288701 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6e097cb1-8802-4e80-b5c9-6469c7387e0b" (UID: "6e097cb1-8802-4e80-b5c9-6469c7387e0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.327014 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gxk2h"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.327508 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts\") pod \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.327578 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tvrb\" (UniqueName: \"kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb\") pod \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.327618 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data\") pod \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.327693 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle\") pod \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\" (UID: \"bddc6e4f-e0b0-4343-85c5-d77aa92d190c\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.328177 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.328198 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.328209 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.328221 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e097cb1-8802-4e80-b5c9-6469c7387e0b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.330829 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb" (OuterVolumeSpecName: "kube-api-access-6tvrb") pod "bddc6e4f-e0b0-4343-85c5-d77aa92d190c" (UID: "bddc6e4f-e0b0-4343-85c5-d77aa92d190c"). InnerVolumeSpecName "kube-api-access-6tvrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.340510 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts" (OuterVolumeSpecName: "scripts") pod "bddc6e4f-e0b0-4343-85c5-d77aa92d190c" (UID: "bddc6e4f-e0b0-4343-85c5-d77aa92d190c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.362481 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bddc6e4f-e0b0-4343-85c5-d77aa92d190c" (UID: "bddc6e4f-e0b0-4343-85c5-d77aa92d190c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.385394 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data" (OuterVolumeSpecName: "config-data") pod "bddc6e4f-e0b0-4343-85c5-d77aa92d190c" (UID: "bddc6e4f-e0b0-4343-85c5-d77aa92d190c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.428961 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rxd9\" (UniqueName: \"kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9\") pod \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429029 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle\") pod \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429105 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data\") pod \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429305 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts\") pod \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\" (UID: \"a444bad7-3d6c-4bf7-9426-db8a387f87ac\") "
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429734 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429751 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tvrb\" (UniqueName: \"kubernetes.io/projected/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-kube-api-access-6tvrb\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429762 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.429771 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bddc6e4f-e0b0-4343-85c5-d77aa92d190c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.437678 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts" (OuterVolumeSpecName: "scripts") pod "a444bad7-3d6c-4bf7-9426-db8a387f87ac" (UID: "a444bad7-3d6c-4bf7-9426-db8a387f87ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.437745 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9" (OuterVolumeSpecName: "kube-api-access-2rxd9") pod "a444bad7-3d6c-4bf7-9426-db8a387f87ac" (UID: "a444bad7-3d6c-4bf7-9426-db8a387f87ac"). InnerVolumeSpecName "kube-api-access-2rxd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.461749 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a444bad7-3d6c-4bf7-9426-db8a387f87ac" (UID: "a444bad7-3d6c-4bf7-9426-db8a387f87ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.464317 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data" (OuterVolumeSpecName: "config-data") pod "a444bad7-3d6c-4bf7-9426-db8a387f87ac" (UID: "a444bad7-3d6c-4bf7-9426-db8a387f87ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.531744 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-scripts\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.531784 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rxd9\" (UniqueName: \"kubernetes.io/projected/a444bad7-3d6c-4bf7-9426-db8a387f87ac-kube-api-access-2rxd9\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.531796 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.531805 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a444bad7-3d6c-4bf7-9426-db8a387f87ac-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.767178 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerStarted","Data":"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d"}
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.767238 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.773288 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-f6mfc" event={"ID":"bddc6e4f-e0b0-4343-85c5-d77aa92d190c","Type":"ContainerDied","Data":"a66cb812ac952e40e1a5e42186f781aa13a4c2ca41dd75562f1167cede19ce29"}
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.773324 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a66cb812ac952e40e1a5e42186f781aa13a4c2ca41dd75562f1167cede19ce29"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.773358 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-f6mfc"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.775204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q" event={"ID":"6e097cb1-8802-4e80-b5c9-6469c7387e0b","Type":"ContainerDied","Data":"79511cfaff42a02897b8522f6b2bede368de25f2c70971ddaf91cb343443c6c3"}
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.775284 4768 scope.go:117] "RemoveContainer" containerID="9b702c8a2f49f152355850a0d01baa5a20f0166d677b2173f439d35a0566a116"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.775421 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-6tz6q"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.783901 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-gxk2h"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.784627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-gxk2h" event={"ID":"a444bad7-3d6c-4bf7-9426-db8a387f87ac","Type":"ContainerDied","Data":"5f75e2ae9aecfe39f89cf5f6e39b789a10716024e4bd5f2ace45da37cf681b40"}
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.784678 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f75e2ae9aecfe39f89cf5f6e39b789a10716024e4bd5f2ace45da37cf681b40"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.802494 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.9137388939999997 podStartE2EDuration="6.802469944s" podCreationTimestamp="2026-02-23 18:52:24 +0000 UTC" firstStartedPulling="2026-02-23 18:52:25.825045038 +0000 UTC m=+1141.215530838" lastFinishedPulling="2026-02-23 18:52:29.713776088 +0000 UTC m=+1145.104261888" observedRunningTime="2026-02-23 18:52:30.794717322 +0000 UTC m=+1146.185203122" watchObservedRunningTime="2026-02-23 18:52:30.802469944 +0000 UTC m=+1146.192955764"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.826592 4768 scope.go:117] "RemoveContainer" containerID="0ad3662694b2461246649c15915809702676b5b698bf913aed43a626ce92365f"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.837818 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 18:52:30 crc kubenswrapper[4768]: E0223 18:52:30.838366 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="init"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.838435 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="init"
Feb 23 18:52:30 crc kubenswrapper[4768]: E0223 18:52:30.838503 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bddc6e4f-e0b0-4343-85c5-d77aa92d190c" containerName="nova-cell1-conductor-db-sync"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.838567 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="bddc6e4f-e0b0-4343-85c5-d77aa92d190c" containerName="nova-cell1-conductor-db-sync"
Feb 23 18:52:30 crc kubenswrapper[4768]: E0223 18:52:30.838633 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a444bad7-3d6c-4bf7-9426-db8a387f87ac" containerName="nova-manage"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.838683 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a444bad7-3d6c-4bf7-9426-db8a387f87ac" containerName="nova-manage"
Feb 23 18:52:30 crc kubenswrapper[4768]: E0223 18:52:30.838741 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="dnsmasq-dns"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.838789 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="dnsmasq-dns"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.839023 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" containerName="dnsmasq-dns"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.839085 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="bddc6e4f-e0b0-4343-85c5-d77aa92d190c" containerName="nova-cell1-conductor-db-sync"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.839169 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a444bad7-3d6c-4bf7-9426-db8a387f87ac" containerName="nova-manage"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.839936 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.841826 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.848868 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.860406 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"]
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.869831 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-6tz6q"]
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.938929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.938975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbhs\" (UniqueName: \"kubernetes.io/projected/15bf982c-902c-45c7-9620-095ec38e9b86-kube-api-access-drbhs\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:30 crc kubenswrapper[4768]: I0223 18:52:30.939000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.015941 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.016181 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-log" containerID="cri-o://88cc5b043d52aa4014786482117e2894b96d75265c4c4aa6796778359f163fd1" gracePeriod=30
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.016355 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-api" containerID="cri-o://19d6847ac07978f03d5b4a9de21b2e7bd0f2f24b69c43291e4ad86142f01cc25" gracePeriod=30
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.039039 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.040114 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drbhs\" (UniqueName: \"kubernetes.io/projected/15bf982c-902c-45c7-9620-095ec38e9b86-kube-api-access-drbhs\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.040156 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.040357 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.044893 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.045475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15bf982c-902c-45c7-9620-095ec38e9b86-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.051280 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.051561 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-log" containerID="cri-o://613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" gracePeriod=30
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.051621 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-metadata" containerID="cri-o://7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f" gracePeriod=30
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.064334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drbhs\" (UniqueName: \"kubernetes.io/projected/15bf982c-902c-45c7-9620-095ec38e9b86-kube-api-access-drbhs\") pod \"nova-cell1-conductor-0\" (UID: \"15bf982c-902c-45c7-9620-095ec38e9b86\") " pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.126719 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.127068 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.238422 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.339360 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e097cb1-8802-4e80-b5c9-6469c7387e0b" path="/var/lib/kubelet/pods/6e097cb1-8802-4e80-b5c9-6469c7387e0b/volumes"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.821460 4768 generic.go:334] "Generic (PLEG): container finished" podID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerID="88cc5b043d52aa4014786482117e2894b96d75265c4c4aa6796778359f163fd1" exitCode=143
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.821894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerDied","Data":"88cc5b043d52aa4014786482117e2894b96d75265c4c4aa6796778359f163fd1"}
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.824575 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827020 4768 generic.go:334] "Generic (PLEG): container finished" podID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerID="7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f" exitCode=0
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827038 4768 generic.go:334] "Generic (PLEG): container finished" podID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerID="613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" exitCode=143
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerDied","Data":"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f"}
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827104 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerDied","Data":"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b"}
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827115 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5a8dae5e-9728-4dc3-9daf-2cca08405500","Type":"ContainerDied","Data":"3fccbea9aadaab22b1d6a21c7b4b06ccbf41f79797b02670071a5ed92aa867ee"}
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.827132 4768 scope.go:117] "RemoveContainer" containerID="7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f"
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.849623 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerName="nova-scheduler-scheduler" containerID="cri-o://bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" gracePeriod=30
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.860753 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwh6\" (UniqueName: \"kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6\") pod \"5a8dae5e-9728-4dc3-9daf-2cca08405500\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") "
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.860835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs\") pod \"5a8dae5e-9728-4dc3-9daf-2cca08405500\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") "
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.860945 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data\") pod \"5a8dae5e-9728-4dc3-9daf-2cca08405500\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") "
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.861087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle\") pod \"5a8dae5e-9728-4dc3-9daf-2cca08405500\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") "
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.861132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs\") pod \"5a8dae5e-9728-4dc3-9daf-2cca08405500\" (UID: \"5a8dae5e-9728-4dc3-9daf-2cca08405500\") "
Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.862573 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs"
(OuterVolumeSpecName: "logs") pod "5a8dae5e-9728-4dc3-9daf-2cca08405500" (UID: "5a8dae5e-9728-4dc3-9daf-2cca08405500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.877855 4768 scope.go:117] "RemoveContainer" containerID="613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.886084 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6" (OuterVolumeSpecName: "kube-api-access-gkwh6") pod "5a8dae5e-9728-4dc3-9daf-2cca08405500" (UID: "5a8dae5e-9728-4dc3-9daf-2cca08405500"). InnerVolumeSpecName "kube-api-access-gkwh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.934341 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5a8dae5e-9728-4dc3-9daf-2cca08405500" (UID: "5a8dae5e-9728-4dc3-9daf-2cca08405500"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.946550 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data" (OuterVolumeSpecName: "config-data") pod "5a8dae5e-9728-4dc3-9daf-2cca08405500" (UID: "5a8dae5e-9728-4dc3-9daf-2cca08405500"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.964208 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a8dae5e-9728-4dc3-9daf-2cca08405500-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.964239 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwh6\" (UniqueName: \"kubernetes.io/projected/5a8dae5e-9728-4dc3-9daf-2cca08405500-kube-api-access-gkwh6\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.964274 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.964286 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.971961 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.973291 4768 scope.go:117] "RemoveContainer" containerID="7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f" Feb 23 18:52:31 crc kubenswrapper[4768]: E0223 18:52:31.973645 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f\": container with ID starting with 7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f not found: ID does not exist" containerID="7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.973678 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f"} err="failed to get container status \"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f\": rpc error: code = NotFound desc = could not find container \"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f\": container with ID starting with 7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f not found: ID does not exist" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.973699 4768 scope.go:117] "RemoveContainer" containerID="613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" Feb 23 18:52:31 crc kubenswrapper[4768]: E0223 18:52:31.973881 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b\": container with ID starting with 613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b not found: ID does not exist" containerID="613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.973902 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b"} err="failed to get container status \"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b\": rpc error: code = NotFound desc = could not find container \"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b\": container with ID starting with 613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b not found: ID does not exist" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.973913 4768 scope.go:117] "RemoveContainer" containerID="7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 
18:52:31.974074 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f"} err="failed to get container status \"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f\": rpc error: code = NotFound desc = could not find container \"7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f\": container with ID starting with 7556a267e05edaf91294c90b344a027be3d88e0effe4e640c1029ed5febfd67f not found: ID does not exist" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.974092 4768 scope.go:117] "RemoveContainer" containerID="613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.974240 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b"} err="failed to get container status \"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b\": rpc error: code = NotFound desc = could not find container \"613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b\": container with ID starting with 613b87342415c071c0d845fa877523ceb8a1eaef4869a3b31051d6992e64b49b not found: ID does not exist" Feb 23 18:52:31 crc kubenswrapper[4768]: I0223 18:52:31.974300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a8dae5e-9728-4dc3-9daf-2cca08405500" (UID: "5a8dae5e-9728-4dc3-9daf-2cca08405500"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.065683 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a8dae5e-9728-4dc3-9daf-2cca08405500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.877380 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"15bf982c-902c-45c7-9620-095ec38e9b86","Type":"ContainerStarted","Data":"7fd7b86579c7c8e81965c8ddcde41c9c5f11e5d2aa6b8685b9a8a6976d760293"} Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.877908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"15bf982c-902c-45c7-9620-095ec38e9b86","Type":"ContainerStarted","Data":"26e576754e86b950795394c344ee56cf984f231d3402bee6fb5feed05d40af2c"} Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.878519 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.881962 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.901094 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.9010711799999997 podStartE2EDuration="2.90107118s" podCreationTimestamp="2026-02-23 18:52:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:32.898515431 +0000 UTC m=+1148.289001251" watchObservedRunningTime="2026-02-23 18:52:32.90107118 +0000 UTC m=+1148.291557000" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.929791 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.941718 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.959761 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:32 crc kubenswrapper[4768]: E0223 18:52:32.961280 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-metadata" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.961304 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-metadata" Feb 23 18:52:32 crc kubenswrapper[4768]: E0223 18:52:32.961323 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-log" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.961331 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-log" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.961585 4768 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-log" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.961609 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" containerName="nova-metadata-metadata" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.965157 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.970672 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.970852 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 18:52:32 crc kubenswrapper[4768]: I0223 18:52:32.984087 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.083977 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlcf9\" (UniqueName: \"kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.084049 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.084097 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs\") 
pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.084151 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.084183 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.185943 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.186006 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.186070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlcf9\" (UniqueName: \"kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " 
pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.186101 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.186145 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.186608 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.194755 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.194876 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.202787 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.205803 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlcf9\" (UniqueName: \"kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9\") pod \"nova-metadata-0\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.299190 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.323580 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a8dae5e-9728-4dc3-9daf-2cca08405500" path="/var/lib/kubelet/pods/5a8dae5e-9728-4dc3-9daf-2cca08405500/volumes" Feb 23 18:52:33 crc kubenswrapper[4768]: I0223 18:52:33.896552 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:52:34 crc kubenswrapper[4768]: E0223 18:52:34.194240 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 18:52:34 crc kubenswrapper[4768]: E0223 18:52:34.197646 4768 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 18:52:34 crc kubenswrapper[4768]: E0223 18:52:34.200154 4768 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 23 18:52:34 crc kubenswrapper[4768]: E0223 18:52:34.200203 4768 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerName="nova-scheduler-scheduler" Feb 23 18:52:34 crc kubenswrapper[4768]: I0223 18:52:34.914529 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerStarted","Data":"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414"} Feb 23 18:52:34 crc kubenswrapper[4768]: I0223 18:52:34.914923 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerStarted","Data":"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654"} Feb 23 18:52:34 crc kubenswrapper[4768]: I0223 18:52:34.914937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerStarted","Data":"421df262b3c66d2fdd33f6e65ae40c508d5cb659b739cf8bb33eaa3d3fe7e8cc"} Feb 23 18:52:34 crc kubenswrapper[4768]: I0223 18:52:34.944154 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.944116708 podStartE2EDuration="2.944116708s" podCreationTimestamp="2026-02-23 18:52:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-23 18:52:34.938869885 +0000 UTC m=+1150.329355735" watchObservedRunningTime="2026-02-23 18:52:34.944116708 +0000 UTC m=+1150.334602548" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.822293 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.929085 4768 generic.go:334] "Generic (PLEG): container finished" podID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" exitCode=0 Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.930295 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.930395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51a8265d-41d6-4bdf-a3e8-cb8ade072b45","Type":"ContainerDied","Data":"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8"} Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.930434 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"51a8265d-41d6-4bdf-a3e8-cb8ade072b45","Type":"ContainerDied","Data":"83b33e4798afc4dac9e08dcd5df9caa5d1451bdf7daf635fcf669ee790298834"} Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.930460 4768 scope.go:117] "RemoveContainer" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.961307 4768 scope.go:117] "RemoveContainer" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" Feb 23 18:52:35 crc kubenswrapper[4768]: E0223 18:52:35.962372 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8\": 
container with ID starting with bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8 not found: ID does not exist" containerID="bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.962405 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8"} err="failed to get container status \"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8\": rpc error: code = NotFound desc = could not find container \"bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8\": container with ID starting with bac0fbd14209952a3b42ec14a5035528e4d196a47b5a5ac992f33bc9bf882ef8 not found: ID does not exist" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.966345 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data\") pod \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.966552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle\") pod \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.966709 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9qks\" (UniqueName: \"kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks\") pod \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\" (UID: \"51a8265d-41d6-4bdf-a3e8-cb8ade072b45\") " Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.973945 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks" (OuterVolumeSpecName: "kube-api-access-w9qks") pod "51a8265d-41d6-4bdf-a3e8-cb8ade072b45" (UID: "51a8265d-41d6-4bdf-a3e8-cb8ade072b45"). InnerVolumeSpecName "kube-api-access-w9qks". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:35 crc kubenswrapper[4768]: I0223 18:52:35.998500 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data" (OuterVolumeSpecName: "config-data") pod "51a8265d-41d6-4bdf-a3e8-cb8ade072b45" (UID: "51a8265d-41d6-4bdf-a3e8-cb8ade072b45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.004414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51a8265d-41d6-4bdf-a3e8-cb8ade072b45" (UID: "51a8265d-41d6-4bdf-a3e8-cb8ade072b45"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.069133 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.069169 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.069186 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9qks\" (UniqueName: \"kubernetes.io/projected/51a8265d-41d6-4bdf-a3e8-cb8ade072b45-kube-api-access-w9qks\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.267174 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.278638 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.299835 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:36 crc kubenswrapper[4768]: E0223 18:52:36.300465 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerName="nova-scheduler-scheduler" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.300484 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerName="nova-scheduler-scheduler" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.300713 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" containerName="nova-scheduler-scheduler" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 
18:52:36.302163 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.305498 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.316193 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.374835 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ntf\" (UniqueName: \"kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.375039 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.375092 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.477793 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ntf\" (UniqueName: \"kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 
18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.478007 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.478053 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.484735 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.493193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.502758 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ntf\" (UniqueName: \"kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf\") pod \"nova-scheduler-0\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.626927 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.941569 4768 generic.go:334] "Generic (PLEG): container finished" podID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerID="19d6847ac07978f03d5b4a9de21b2e7bd0f2f24b69c43291e4ad86142f01cc25" exitCode=0 Feb 23 18:52:36 crc kubenswrapper[4768]: I0223 18:52:36.941908 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerDied","Data":"19d6847ac07978f03d5b4a9de21b2e7bd0f2f24b69c43291e4ad86142f01cc25"} Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.073894 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.194419 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs\") pod \"9dda515a-61a3-46ba-8946-d849a955aa0a\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.194850 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data\") pod \"9dda515a-61a3-46ba-8946-d849a955aa0a\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.194872 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs" (OuterVolumeSpecName: "logs") pod "9dda515a-61a3-46ba-8946-d849a955aa0a" (UID: "9dda515a-61a3-46ba-8946-d849a955aa0a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.195016 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lnjv\" (UniqueName: \"kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv\") pod \"9dda515a-61a3-46ba-8946-d849a955aa0a\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.195052 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle\") pod \"9dda515a-61a3-46ba-8946-d849a955aa0a\" (UID: \"9dda515a-61a3-46ba-8946-d849a955aa0a\") " Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.195540 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dda515a-61a3-46ba-8946-d849a955aa0a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.202053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv" (OuterVolumeSpecName: "kube-api-access-7lnjv") pod "9dda515a-61a3-46ba-8946-d849a955aa0a" (UID: "9dda515a-61a3-46ba-8946-d849a955aa0a"). InnerVolumeSpecName "kube-api-access-7lnjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.213197 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:52:37 crc kubenswrapper[4768]: W0223 18:52:37.216383 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59205d38_cfa3_4689_b3df_087dbf419370.slice/crio-ba5f2dcb415fed1f932c45e413e2ece0c11798147b1752e2b1dba8a5ee39fe0d WatchSource:0}: Error finding container ba5f2dcb415fed1f932c45e413e2ece0c11798147b1752e2b1dba8a5ee39fe0d: Status 404 returned error can't find the container with id ba5f2dcb415fed1f932c45e413e2ece0c11798147b1752e2b1dba8a5ee39fe0d Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.224371 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data" (OuterVolumeSpecName: "config-data") pod "9dda515a-61a3-46ba-8946-d849a955aa0a" (UID: "9dda515a-61a3-46ba-8946-d849a955aa0a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.224900 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dda515a-61a3-46ba-8946-d849a955aa0a" (UID: "9dda515a-61a3-46ba-8946-d849a955aa0a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.298703 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.299107 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lnjv\" (UniqueName: \"kubernetes.io/projected/9dda515a-61a3-46ba-8946-d849a955aa0a-kube-api-access-7lnjv\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.299129 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dda515a-61a3-46ba-8946-d849a955aa0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.325654 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51a8265d-41d6-4bdf-a3e8-cb8ade072b45" path="/var/lib/kubelet/pods/51a8265d-41d6-4bdf-a3e8-cb8ade072b45/volumes" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.957234 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dda515a-61a3-46ba-8946-d849a955aa0a","Type":"ContainerDied","Data":"a5ee5fe1c042f8d0e3b5bc4eb2c0e1e9950936814fc9311dca130e53a33e8c6b"} Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.957337 4768 scope.go:117] "RemoveContainer" containerID="19d6847ac07978f03d5b4a9de21b2e7bd0f2f24b69c43291e4ad86142f01cc25" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.957564 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.963475 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59205d38-cfa3-4689-b3df-087dbf419370","Type":"ContainerStarted","Data":"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a"} Feb 23 18:52:37 crc kubenswrapper[4768]: I0223 18:52:37.963541 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59205d38-cfa3-4689-b3df-087dbf419370","Type":"ContainerStarted","Data":"ba5f2dcb415fed1f932c45e413e2ece0c11798147b1752e2b1dba8a5ee39fe0d"} Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.013794 4768 scope.go:117] "RemoveContainer" containerID="88cc5b043d52aa4014786482117e2894b96d75265c4c4aa6796778359f163fd1" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.017710 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.017690421 podStartE2EDuration="2.017690421s" podCreationTimestamp="2026-02-23 18:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:37.98731758 +0000 UTC m=+1153.377803410" watchObservedRunningTime="2026-02-23 18:52:38.017690421 +0000 UTC m=+1153.408176221" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.043167 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.060021 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.069011 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:38 crc kubenswrapper[4768]: E0223 18:52:38.069606 4768 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-api" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.069633 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-api" Feb 23 18:52:38 crc kubenswrapper[4768]: E0223 18:52:38.069660 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-log" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.069671 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-log" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.069911 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-log" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.069935 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" containerName="nova-api-api" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.071302 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.073693 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.082388 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.217338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.217430 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmjv\" (UniqueName: \"kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.217512 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.217565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.299606 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.301325 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.319069 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.319203 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.319237 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmjv\" (UniqueName: \"kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.319318 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.319991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.327940 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.329066 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.353680 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmjv\" (UniqueName: \"kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv\") pod \"nova-api-0\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.392073 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:52:38 crc kubenswrapper[4768]: I0223 18:52:38.895229 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:52:39 crc kubenswrapper[4768]: I0223 18:52:39.000337 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerStarted","Data":"1592e25f75f3f5cd269935e47a86bafad6fcfb7c13507c597d91db11ce41a6a5"} Feb 23 18:52:39 crc kubenswrapper[4768]: I0223 18:52:39.326880 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dda515a-61a3-46ba-8946-d849a955aa0a" path="/var/lib/kubelet/pods/9dda515a-61a3-46ba-8946-d849a955aa0a/volumes" Feb 23 18:52:39 crc kubenswrapper[4768]: I0223 18:52:39.545664 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:52:39 crc kubenswrapper[4768]: I0223 18:52:39.545753 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:52:40 crc kubenswrapper[4768]: I0223 18:52:40.013491 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerStarted","Data":"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827"} Feb 23 18:52:40 crc kubenswrapper[4768]: I0223 18:52:40.014005 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerStarted","Data":"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b"} Feb 23 18:52:40 crc kubenswrapper[4768]: I0223 18:52:40.045566 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.045536853 podStartE2EDuration="2.045536853s" podCreationTimestamp="2026-02-23 18:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:40.040495005 +0000 UTC m=+1155.430980805" watchObservedRunningTime="2026-02-23 18:52:40.045536853 +0000 UTC m=+1155.436022643" Feb 23 18:52:41 crc kubenswrapper[4768]: I0223 18:52:41.268159 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 23 18:52:41 crc kubenswrapper[4768]: I0223 18:52:41.627748 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 23 18:52:43 crc kubenswrapper[4768]: I0223 18:52:43.300463 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 18:52:43 crc kubenswrapper[4768]: I0223 18:52:43.302106 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 23 18:52:44 crc kubenswrapper[4768]: I0223 18:52:44.320471 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 18:52:44 crc kubenswrapper[4768]: I0223 18:52:44.320526 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 23 18:52:46 crc kubenswrapper[4768]: I0223 18:52:46.627490 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 23 18:52:46 crc kubenswrapper[4768]: I0223 18:52:46.665587 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 23 18:52:47 crc kubenswrapper[4768]: I0223 18:52:47.147110 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 23 18:52:48 crc kubenswrapper[4768]: I0223 18:52:48.393635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 18:52:48 crc kubenswrapper[4768]: I0223 18:52:48.394117 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 18:52:49 crc kubenswrapper[4768]: I0223 18:52:49.476571 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 18:52:49 crc kubenswrapper[4768]: I0223 18:52:49.477176 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 23 18:52:53 crc kubenswrapper[4768]: I0223 18:52:53.306691 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 18:52:53 crc kubenswrapper[4768]: I0223 18:52:53.330985 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 23 18:52:53 crc kubenswrapper[4768]: I0223 18:52:53.331118 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 23 18:52:54 crc kubenswrapper[4768]: I0223 18:52:54.219121 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.074873 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.164976 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcw52\" (UniqueName: \"kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52\") pod \"c2d46346-7b49-48a8-995f-a9e01ac5185b\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.165181 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data\") pod \"c2d46346-7b49-48a8-995f-a9e01ac5185b\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.165318 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle\") pod \"c2d46346-7b49-48a8-995f-a9e01ac5185b\" (UID: \"c2d46346-7b49-48a8-995f-a9e01ac5185b\") " Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.173521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52" (OuterVolumeSpecName: "kube-api-access-wcw52") pod "c2d46346-7b49-48a8-995f-a9e01ac5185b" (UID: 
"c2d46346-7b49-48a8-995f-a9e01ac5185b"). InnerVolumeSpecName "kube-api-access-wcw52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.194023 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.195825 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data" (OuterVolumeSpecName: "config-data") pod "c2d46346-7b49-48a8-995f-a9e01ac5185b" (UID: "c2d46346-7b49-48a8-995f-a9e01ac5185b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.198216 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2d46346-7b49-48a8-995f-a9e01ac5185b" (UID: "c2d46346-7b49-48a8-995f-a9e01ac5185b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.234168 4768 generic.go:334] "Generic (PLEG): container finished" podID="c2d46346-7b49-48a8-995f-a9e01ac5185b" containerID="6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912" exitCode=137 Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.234333 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2d46346-7b49-48a8-995f-a9e01ac5185b","Type":"ContainerDied","Data":"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912"} Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.234394 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c2d46346-7b49-48a8-995f-a9e01ac5185b","Type":"ContainerDied","Data":"8b1cd89a17ac02d821171c817481f72599b8a45ba7d493cf2662e6aa5fa9ab6a"} Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.234419 4768 scope.go:117] "RemoveContainer" containerID="6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.234594 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.274919 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.275173 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcw52\" (UniqueName: \"kubernetes.io/projected/c2d46346-7b49-48a8-995f-a9e01ac5185b-kube-api-access-wcw52\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.275384 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d46346-7b49-48a8-995f-a9e01ac5185b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.332154 4768 scope.go:117] "RemoveContainer" containerID="6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.335458 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:55 crc kubenswrapper[4768]: E0223 18:52:55.335565 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912\": container with ID starting with 6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912 not found: ID does not exist" containerID="6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.335870 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912"} err="failed to get container status 
\"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912\": rpc error: code = NotFound desc = could not find container \"6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912\": container with ID starting with 6cabbf3aa3c7b7e17141e8da2390d37c9bada0f18ad9a226a7df43a3d0354912 not found: ID does not exist" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.341796 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.352074 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 23 18:52:55 crc kubenswrapper[4768]: E0223 18:52:55.352889 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d46346-7b49-48a8-995f-a9e01ac5185b" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.352916 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d46346-7b49-48a8-995f-a9e01ac5185b" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.353354 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d46346-7b49-48a8-995f-a9e01ac5185b" containerName="nova-cell1-novncproxy-novncproxy" Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.354430 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.360451 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.360642 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.360760 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.362346 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.479852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.479911 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.480129 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6gb\" (UniqueName: \"kubernetes.io/projected/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-kube-api-access-qx6gb\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.480362 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.480641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.583504 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx6gb\" (UniqueName: \"kubernetes.io/projected/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-kube-api-access-qx6gb\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.583596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.583668 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.583761 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.583792 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.588878 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.589302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.590015 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.590911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.601107 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6gb\" (UniqueName: \"kubernetes.io/projected/c6a6d01d-0bb4-43aa-85c6-699d47fd2711-kube-api-access-qx6gb\") pod \"nova-cell1-novncproxy-0\" (UID: \"c6a6d01d-0bb4-43aa-85c6-699d47fd2711\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:55 crc kubenswrapper[4768]: I0223 18:52:55.672537 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:52:56 crc kubenswrapper[4768]: I0223 18:52:56.180469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 23 18:52:56 crc kubenswrapper[4768]: I0223 18:52:56.249860 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c6a6d01d-0bb4-43aa-85c6-699d47fd2711","Type":"ContainerStarted","Data":"be0ee3d97380b0f378dd3fe23c10c3c522b7601371c3ac3886be1042b6762da3"}
Feb 23 18:52:57 crc kubenswrapper[4768]: I0223 18:52:57.261733 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"c6a6d01d-0bb4-43aa-85c6-699d47fd2711","Type":"ContainerStarted","Data":"c4caa424b3de5e7d35eab854104e4b632bfa55789425a876c4975d20ab119997"}
Feb 23 18:52:57 crc kubenswrapper[4768]: I0223 18:52:57.294866 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.294843308 podStartE2EDuration="2.294843308s" podCreationTimestamp="2026-02-23 18:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:52:57.291438385 +0000 UTC m=+1172.681924225" watchObservedRunningTime="2026-02-23 18:52:57.294843308 +0000 UTC m=+1172.685329138"
Feb 23 18:52:57 crc kubenswrapper[4768]: I0223 18:52:57.343368 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d46346-7b49-48a8-995f-a9e01ac5185b" path="/var/lib/kubelet/pods/c2d46346-7b49-48a8-995f-a9e01ac5185b/volumes"
Feb 23 18:52:58 crc kubenswrapper[4768]: I0223 18:52:58.403848 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 23 18:52:58 crc kubenswrapper[4768]: I0223 18:52:58.405052 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 23 18:52:58 crc kubenswrapper[4768]: I0223 18:52:58.405565 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 23 18:52:58 crc kubenswrapper[4768]: I0223 18:52:58.409392 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.046168 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.046635 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1db6c967-62ff-4db3-b37d-152bdb673d74" containerName="kube-state-metrics" containerID="cri-o://7171b990d311e9b89933d6c7670eacb3894fbdba5a14d235da85fc02ebbacbb0" gracePeriod=30
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.316311 4768 generic.go:334] "Generic (PLEG): container finished" podID="1db6c967-62ff-4db3-b37d-152bdb673d74" containerID="7171b990d311e9b89933d6c7670eacb3894fbdba5a14d235da85fc02ebbacbb0" exitCode=2
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.326078 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1db6c967-62ff-4db3-b37d-152bdb673d74","Type":"ContainerDied","Data":"7171b990d311e9b89933d6c7670eacb3894fbdba5a14d235da85fc02ebbacbb0"}
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.326137 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.326185 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.683341 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"]
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.686687 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.733661 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"]
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.745071 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.813519 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw8n8\" (UniqueName: \"kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8\") pod \"1db6c967-62ff-4db3-b37d-152bdb673d74\" (UID: \"1db6c967-62ff-4db3-b37d-152bdb673d74\") "
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814150 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814411 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814609 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdcdh\" (UniqueName: \"kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814650 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814669 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.814821 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.832630 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8" (OuterVolumeSpecName: "kube-api-access-jw8n8") pod "1db6c967-62ff-4db3-b37d-152bdb673d74" (UID: "1db6c967-62ff-4db3-b37d-152bdb673d74"). InnerVolumeSpecName "kube-api-access-jw8n8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.916844 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.916938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.916989 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.917040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdcdh\" (UniqueName: \"kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.917060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.917075 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.917143 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw8n8\" (UniqueName: \"kubernetes.io/projected/1db6c967-62ff-4db3-b37d-152bdb673d74-kube-api-access-jw8n8\") on node \"crc\" DevicePath \"\""
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.918330 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.918450 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.918457 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.918508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.918473 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:52:59 crc kubenswrapper[4768]: I0223 18:52:59.937134 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdcdh\" (UniqueName: \"kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh\") pod \"dnsmasq-dns-89c5cd4d5-6mm5g\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.049429 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.330442 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.331100 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1db6c967-62ff-4db3-b37d-152bdb673d74","Type":"ContainerDied","Data":"13369a0723a26c42c35b191b9d19b4aa4034e2efc0c33987534c274de5bb86d5"}
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.331146 4768 scope.go:117] "RemoveContainer" containerID="7171b990d311e9b89933d6c7670eacb3894fbdba5a14d235da85fc02ebbacbb0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.370311 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.384578 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.392387 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:53:00 crc kubenswrapper[4768]: E0223 18:53:00.392956 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1db6c967-62ff-4db3-b37d-152bdb673d74" containerName="kube-state-metrics"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.392971 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="1db6c967-62ff-4db3-b37d-152bdb673d74" containerName="kube-state-metrics"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.393213 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="1db6c967-62ff-4db3-b37d-152bdb673d74" containerName="kube-state-metrics"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.394172 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.398724 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.398970 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.411758 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.510084 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"]
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.531006 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.531109 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.531152 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9shc\" (UniqueName: \"kubernetes.io/projected/e07b92be-5204-4ddb-97de-24984c997328-kube-api-access-b9shc\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.531280 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.632704 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.633197 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.633269 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.633297 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9shc\" (UniqueName: \"kubernetes.io/projected/e07b92be-5204-4ddb-97de-24984c997328-kube-api-access-b9shc\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.638869 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.639922 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.640102 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e07b92be-5204-4ddb-97de-24984c997328-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.661233 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9shc\" (UniqueName: \"kubernetes.io/projected/e07b92be-5204-4ddb-97de-24984c997328-kube-api-access-b9shc\") pod \"kube-state-metrics-0\" (UID: \"e07b92be-5204-4ddb-97de-24984c997328\") " pod="openstack/kube-state-metrics-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.673635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 23 18:53:00 crc kubenswrapper[4768]: I0223 18:53:00.717208 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 23 18:53:01 crc kubenswrapper[4768]: W0223 18:53:01.217546 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode07b92be_5204_4ddb_97de_24984c997328.slice/crio-f4e30db366027f0cb4c569121d6d64bc6d1a074269d3d84c43f008acc7a376dd WatchSource:0}: Error finding container f4e30db366027f0cb4c569121d6d64bc6d1a074269d3d84c43f008acc7a376dd: Status 404 returned error can't find the container with id f4e30db366027f0cb4c569121d6d64bc6d1a074269d3d84c43f008acc7a376dd
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.220286 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.283142 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.283454 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-central-agent" containerID="cri-o://16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb" gracePeriod=30
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.283547 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="sg-core" containerID="cri-o://dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e" gracePeriod=30
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.283547 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="proxy-httpd" containerID="cri-o://bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d" gracePeriod=30
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.283633 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-notification-agent" containerID="cri-o://b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec" gracePeriod=30
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.319409 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db6c967-62ff-4db3-b37d-152bdb673d74" path="/var/lib/kubelet/pods/1db6c967-62ff-4db3-b37d-152bdb673d74/volumes"
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.340722 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e07b92be-5204-4ddb-97de-24984c997328","Type":"ContainerStarted","Data":"f4e30db366027f0cb4c569121d6d64bc6d1a074269d3d84c43f008acc7a376dd"}
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.342713 4768 generic.go:334] "Generic (PLEG): container finished" podID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerID="2a77ea6cbb27eb651f637e3c17c626ce6456d44fc252c222b3890ad6b5cad60e" exitCode=0
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.342778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" event={"ID":"841d70ea-a129-448e-bf61-2e13c1b19a96","Type":"ContainerDied","Data":"2a77ea6cbb27eb651f637e3c17c626ce6456d44fc252c222b3890ad6b5cad60e"}
Feb 23 18:53:01 crc kubenswrapper[4768]: I0223 18:53:01.342795 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" event={"ID":"841d70ea-a129-448e-bf61-2e13c1b19a96","Type":"ContainerStarted","Data":"1dd2f723b2a1c3b37e978ee9dbdfcb9e287f386e797c64fbd6aee744e9ae2f53"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.241229 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.363778 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" event={"ID":"841d70ea-a129-448e-bf61-2e13c1b19a96","Type":"ContainerStarted","Data":"43cc8e6f4a80fe250c65ecc08ae8b76f36da949216b63c4563dbd007cb979888"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.364647 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g"
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.394862 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e07b92be-5204-4ddb-97de-24984c997328","Type":"ContainerStarted","Data":"ae5ef5edc496d407cd89fe38e029647d219842525c6f489963408b1b31912c9d"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.395537 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.408901 4768 generic.go:334] "Generic (PLEG): container finished" podID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerID="bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d" exitCode=0
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.416618 4768 generic.go:334] "Generic (PLEG): container finished" podID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerID="dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e" exitCode=2
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.416653 4768 generic.go:334] "Generic (PLEG): container finished" podID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerID="16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb" exitCode=0
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.417306 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-log" containerID="cri-o://2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b" gracePeriod=30
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.412328 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerDied","Data":"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.417504 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerDied","Data":"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.417529 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerDied","Data":"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb"}
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.417651 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-api" containerID="cri-o://218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827" gracePeriod=30
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.451692 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" podStartSLOduration=3.451658958 podStartE2EDuration="3.451658958s" podCreationTimestamp="2026-02-23 18:52:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:02.411024767 +0000 UTC m=+1177.801510567" watchObservedRunningTime="2026-02-23 18:53:02.451658958 +0000 UTC m=+1177.842144758"
Feb 23 18:53:02 crc kubenswrapper[4768]: I0223 18:53:02.456051 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.834062182 podStartE2EDuration="2.456042688s" podCreationTimestamp="2026-02-23 18:53:00 +0000 UTC" firstStartedPulling="2026-02-23 18:53:01.220297112 +0000 UTC m=+1176.610782912" lastFinishedPulling="2026-02-23 18:53:01.842277618 +0000 UTC m=+1177.232763418" observedRunningTime="2026-02-23 18:53:02.437808969 +0000 UTC m=+1177.828294769" watchObservedRunningTime="2026-02-23 18:53:02.456042688 +0000 UTC m=+1177.846528488"
Feb 23 18:53:03 crc kubenswrapper[4768]: I0223 18:53:03.427533 4768 generic.go:334] "Generic (PLEG): container finished" podID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerID="2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b" exitCode=143
Feb 23 18:53:03 crc kubenswrapper[4768]: I0223 18:53:03.427632 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerDied","Data":"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b"}
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.066312 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.153878 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") "
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154110 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") "
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154233 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") "
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154329 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") "
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154416 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") "
Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154627 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d42dw\" (UniqueName:
\"kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.154659 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle\") pod \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\" (UID: \"84d2708c-15ee-4b5e-aaeb-03ad646f3d51\") " Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.155606 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.155977 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.157863 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.165707 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw" (OuterVolumeSpecName: "kube-api-access-d42dw") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "kube-api-access-d42dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.167082 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts" (OuterVolumeSpecName: "scripts") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.199402 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.256234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.259580 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d42dw\" (UniqueName: \"kubernetes.io/projected/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-kube-api-access-d42dw\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.259614 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.259623 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.259632 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.259640 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.277702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data" (OuterVolumeSpecName: "config-data") pod "84d2708c-15ee-4b5e-aaeb-03ad646f3d51" (UID: "84d2708c-15ee-4b5e-aaeb-03ad646f3d51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.361275 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84d2708c-15ee-4b5e-aaeb-03ad646f3d51-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.442707 4768 generic.go:334] "Generic (PLEG): container finished" podID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerID="b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec" exitCode=0 Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.442766 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerDied","Data":"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec"} Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.442836 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84d2708c-15ee-4b5e-aaeb-03ad646f3d51","Type":"ContainerDied","Data":"9d5de070ce2fa8c399658c1ee7b346dd1231c5262d42923099037b7dcd27beba"} Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.442863 4768 scope.go:117] "RemoveContainer" containerID="bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.442866 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.477844 4768 scope.go:117] "RemoveContainer" containerID="dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.490644 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.505989 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.515400 4768 scope.go:117] "RemoveContainer" containerID="b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.530310 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.530847 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-central-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.530870 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-central-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.530897 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="proxy-httpd" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.530906 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="proxy-httpd" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.530929 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-notification-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.530938 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-notification-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.530956 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="sg-core" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.530965 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="sg-core" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.531223 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-central-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.531380 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="proxy-httpd" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.531400 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="sg-core" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.531422 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" containerName="ceilometer-notification-agent" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.533808 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.537040 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.537242 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.537591 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.539386 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.553852 4768 scope.go:117] "RemoveContainer" containerID="16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.601952 4768 scope.go:117] "RemoveContainer" containerID="bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.604763 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d\": container with ID starting with bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d not found: ID does not exist" containerID="bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.604800 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d"} err="failed to get container status \"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d\": rpc error: code = NotFound desc = could not find container \"bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d\": 
container with ID starting with bd184d47f4e40cc6044777ea6b14296e67f67e4626cf33d2c1dfef5cfbf70c7d not found: ID does not exist" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.604826 4768 scope.go:117] "RemoveContainer" containerID="dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.605340 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e\": container with ID starting with dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e not found: ID does not exist" containerID="dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.605363 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e"} err="failed to get container status \"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e\": rpc error: code = NotFound desc = could not find container \"dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e\": container with ID starting with dbf86146514b711b77845dc670fc3d26cc0f1ddcbdd33ecd72804cb315215a9e not found: ID does not exist" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.605376 4768 scope.go:117] "RemoveContainer" containerID="b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.605974 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec\": container with ID starting with b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec not found: ID does not exist" 
containerID="b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.606049 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec"} err="failed to get container status \"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec\": rpc error: code = NotFound desc = could not find container \"b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec\": container with ID starting with b2ec3d22432b90222a08e9de2cdd5960ee4e57b50f744c14bfb71ff4f4b109ec not found: ID does not exist" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.606490 4768 scope.go:117] "RemoveContainer" containerID="16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb" Feb 23 18:53:04 crc kubenswrapper[4768]: E0223 18:53:04.607772 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb\": container with ID starting with 16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb not found: ID does not exist" containerID="16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.607839 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb"} err="failed to get container status \"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb\": rpc error: code = NotFound desc = could not find container \"16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb\": container with ID starting with 16d4136df64171e122b94bdb6e39577ac17ecfa3eea3a79b297e824f987d3feb not found: ID does not exist" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.668618 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.668724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.668762 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.668849 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.669195 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25cxv\" (UniqueName: \"kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.669301 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.669395 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.669462 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771347 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25cxv\" (UniqueName: \"kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771393 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771426 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 
18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771456 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771522 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771542 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771558 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.771593 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.772380 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.772794 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.777171 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.777238 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.778038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.778532 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.778728 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.795159 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25cxv\" (UniqueName: \"kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv\") pod \"ceilometer-0\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " pod="openstack/ceilometer-0" Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.822431 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:04 crc kubenswrapper[4768]: I0223 18:53:04.823035 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:05 crc kubenswrapper[4768]: I0223 18:53:05.327452 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d2708c-15ee-4b5e-aaeb-03ad646f3d51" path="/var/lib/kubelet/pods/84d2708c-15ee-4b5e-aaeb-03ad646f3d51/volumes" Feb 23 18:53:05 crc kubenswrapper[4768]: I0223 18:53:05.333651 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:05 crc kubenswrapper[4768]: I0223 18:53:05.463095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerStarted","Data":"10ae469e020afbe7a321620280fb1c4e15adce1165127528c10fb536952691b5"} Feb 23 18:53:05 crc kubenswrapper[4768]: I0223 18:53:05.673848 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:53:05 crc kubenswrapper[4768]: I0223 18:53:05.705638 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.102151 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.200434 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmjv\" (UniqueName: \"kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv\") pod \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.200570 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle\") pod \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.200714 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data\") pod \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.200746 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs\") pod \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\" (UID: \"8e1cfc59-e792-45d5-a595-4c4329a4a4b6\") " Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.201502 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs" (OuterVolumeSpecName: "logs") pod "8e1cfc59-e792-45d5-a595-4c4329a4a4b6" (UID: "8e1cfc59-e792-45d5-a595-4c4329a4a4b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.212013 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv" (OuterVolumeSpecName: "kube-api-access-ssmjv") pod "8e1cfc59-e792-45d5-a595-4c4329a4a4b6" (UID: "8e1cfc59-e792-45d5-a595-4c4329a4a4b6"). InnerVolumeSpecName "kube-api-access-ssmjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.237036 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e1cfc59-e792-45d5-a595-4c4329a4a4b6" (UID: "8e1cfc59-e792-45d5-a595-4c4329a4a4b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.249894 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data" (OuterVolumeSpecName: "config-data") pod "8e1cfc59-e792-45d5-a595-4c4329a4a4b6" (UID: "8e1cfc59-e792-45d5-a595-4c4329a4a4b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.303287 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.303320 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.303330 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.303340 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssmjv\" (UniqueName: \"kubernetes.io/projected/8e1cfc59-e792-45d5-a595-4c4329a4a4b6-kube-api-access-ssmjv\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.478767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerStarted","Data":"9a02329cf4330379c24f2953dd8142a088b8e510800674af4a32fbe1ea54c7cc"} Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.481037 4768 generic.go:334] "Generic (PLEG): container finished" podID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerID="218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827" exitCode=0 Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.481147 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.481169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerDied","Data":"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827"} Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.481316 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8e1cfc59-e792-45d5-a595-4c4329a4a4b6","Type":"ContainerDied","Data":"1592e25f75f3f5cd269935e47a86bafad6fcfb7c13507c597d91db11ce41a6a5"} Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.481398 4768 scope.go:117] "RemoveContainer" containerID="218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.524748 4768 scope.go:117] "RemoveContainer" containerID="2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.524908 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.527752 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.537926 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.556576 4768 scope.go:117] "RemoveContainer" containerID="218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827" Feb 23 18:53:06 crc kubenswrapper[4768]: E0223 18:53:06.556993 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827\": container with ID starting with 
218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827 not found: ID does not exist" containerID="218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.557044 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827"} err="failed to get container status \"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827\": rpc error: code = NotFound desc = could not find container \"218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827\": container with ID starting with 218ba0dafae9c66ef73ef494ab1ea672a5c04dfe0553e9bfad4a0ab14e32d827 not found: ID does not exist" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.557071 4768 scope.go:117] "RemoveContainer" containerID="2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b" Feb 23 18:53:06 crc kubenswrapper[4768]: E0223 18:53:06.557364 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b\": container with ID starting with 2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b not found: ID does not exist" containerID="2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.557394 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b"} err="failed to get container status \"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b\": rpc error: code = NotFound desc = could not find container \"2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b\": container with ID starting with 2451098a98e5f031e62abd096cc91cd659c10a1655552ee4dc89af815af7710b not found: ID does not 
exist" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.569189 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:06 crc kubenswrapper[4768]: E0223 18:53:06.569773 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-api" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.569797 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-api" Feb 23 18:53:06 crc kubenswrapper[4768]: E0223 18:53:06.569849 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-log" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.569861 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-log" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.570114 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-log" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.570142 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" containerName="nova-api-api" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.571476 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.576800 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.576879 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.577031 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.580701 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614120 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614525 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm5h9\" (UniqueName: \"kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614604 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614650 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614708 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.614777 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.717403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.717641 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.718312 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 
crc kubenswrapper[4768]: I0223 18:53:06.718502 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm5h9\" (UniqueName: \"kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.718592 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.718648 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.719666 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.721943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.723966 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs\") pod \"nova-api-0\" (UID: 
\"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.727348 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.727962 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.766982 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm5h9\" (UniqueName: \"kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9\") pod \"nova-api-0\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.800395 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n2g99"] Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.801653 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.809860 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n2g99"] Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.825849 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.826115 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.895651 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.924287 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2j9t\" (UniqueName: \"kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.924338 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:06 crc kubenswrapper[4768]: I0223 18:53:06.924425 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:06 crc 
kubenswrapper[4768]: I0223 18:53:06.924477 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.026528 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2j9t\" (UniqueName: \"kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.027060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.027359 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.027407 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 
18:53:07.035089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.039590 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.041073 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.066817 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2j9t\" (UniqueName: \"kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t\") pod \"nova-cell1-cell-mapping-n2g99\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.184480 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.362470 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1cfc59-e792-45d5-a595-4c4329a4a4b6" path="/var/lib/kubelet/pods/8e1cfc59-e792-45d5-a595-4c4329a4a4b6/volumes" Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.500595 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.511336 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerStarted","Data":"82fa4221546de4c5ee73c233c6b679d35faa8293f26e94f1d5734bf1bccca6a1"} Feb 23 18:53:07 crc kubenswrapper[4768]: W0223 18:53:07.512593 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7227c72c_da97_4e44_8887_7b2b26d3da8b.slice/crio-c8233b17439a6ca9d7c9bea4752c7fe8604d3b23f066e05001947362afc610d0 WatchSource:0}: Error finding container c8233b17439a6ca9d7c9bea4752c7fe8604d3b23f066e05001947362afc610d0: Status 404 returned error can't find the container with id c8233b17439a6ca9d7c9bea4752c7fe8604d3b23f066e05001947362afc610d0 Feb 23 18:53:07 crc kubenswrapper[4768]: I0223 18:53:07.711365 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n2g99"] Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.527416 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n2g99" event={"ID":"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7","Type":"ContainerStarted","Data":"711b65af74f086abcc854f43ef9c992273c395b699345777aecec2930d31774c"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.527715 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n2g99" 
event={"ID":"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7","Type":"ContainerStarted","Data":"804edb81a1879e375562dc944b524f33b36b8e083bfe26f350cd6a32ae952ebd"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.533149 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerStarted","Data":"6fbcefd90a1f0ed51dbcbe89f7eefa6f52a6f9090a143e5bb1e180f6866d8542"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.540232 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerStarted","Data":"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.540308 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerStarted","Data":"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.540320 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerStarted","Data":"c8233b17439a6ca9d7c9bea4752c7fe8604d3b23f066e05001947362afc610d0"} Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.553823 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n2g99" podStartSLOduration=2.553800914 podStartE2EDuration="2.553800914s" podCreationTimestamp="2026-02-23 18:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:08.545521918 +0000 UTC m=+1183.936007718" watchObservedRunningTime="2026-02-23 18:53:08.553800914 +0000 UTC m=+1183.944286714" Feb 23 18:53:08 crc kubenswrapper[4768]: I0223 18:53:08.581489 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.58146589 podStartE2EDuration="2.58146589s" podCreationTimestamp="2026-02-23 18:53:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:08.563465908 +0000 UTC m=+1183.953951708" watchObservedRunningTime="2026-02-23 18:53:08.58146589 +0000 UTC m=+1183.971951690" Feb 23 18:53:09 crc kubenswrapper[4768]: I0223 18:53:09.545284 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:53:09 crc kubenswrapper[4768]: I0223 18:53:09.545776 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.052449 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.143593 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.143941 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="dnsmasq-dns" containerID="cri-o://b747ef83a56779637975ed5d96d012c32b91d295deda16d4037d359aab91e76a" gracePeriod=10 Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.573327 4768 
generic.go:334] "Generic (PLEG): container finished" podID="95585638-93ab-482b-8618-20d1e1d2b01b" containerID="b747ef83a56779637975ed5d96d012c32b91d295deda16d4037d359aab91e76a" exitCode=0 Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.573395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerDied","Data":"b747ef83a56779637975ed5d96d012c32b91d295deda16d4037d359aab91e76a"} Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.582900 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerStarted","Data":"c6e7a9b66d2250d2d559c9e3f04cf4e4171be2eb7100baee26e37a967f56f49e"} Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.583161 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-central-agent" containerID="cri-o://9a02329cf4330379c24f2953dd8142a088b8e510800674af4a32fbe1ea54c7cc" gracePeriod=30 Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.583632 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.584041 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="sg-core" containerID="cri-o://6fbcefd90a1f0ed51dbcbe89f7eefa6f52a6f9090a143e5bb1e180f6866d8542" gracePeriod=30 Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.584052 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="proxy-httpd" containerID="cri-o://c6e7a9b66d2250d2d559c9e3f04cf4e4171be2eb7100baee26e37a967f56f49e" gracePeriod=30 
Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.584130 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-notification-agent" containerID="cri-o://82fa4221546de4c5ee73c233c6b679d35faa8293f26e94f1d5734bf1bccca6a1" gracePeriod=30 Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.645524 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.453056208 podStartE2EDuration="6.645500882s" podCreationTimestamp="2026-02-23 18:53:04 +0000 UTC" firstStartedPulling="2026-02-23 18:53:05.327905236 +0000 UTC m=+1180.718391076" lastFinishedPulling="2026-02-23 18:53:09.52034996 +0000 UTC m=+1184.910835750" observedRunningTime="2026-02-23 18:53:10.632574349 +0000 UTC m=+1186.023060149" watchObservedRunningTime="2026-02-23 18:53:10.645500882 +0000 UTC m=+1186.035986682" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.737739 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.771570 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.860482 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.861040 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.861300 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.861360 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd6qn\" (UniqueName: \"kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.861429 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.861486 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc\") pod \"95585638-93ab-482b-8618-20d1e1d2b01b\" (UID: \"95585638-93ab-482b-8618-20d1e1d2b01b\") " Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.938474 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn" (OuterVolumeSpecName: "kube-api-access-nd6qn") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "kube-api-access-nd6qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:10 crc kubenswrapper[4768]: I0223 18:53:10.975995 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd6qn\" (UniqueName: \"kubernetes.io/projected/95585638-93ab-482b-8618-20d1e1d2b01b-kube-api-access-nd6qn\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.094367 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.149475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config" (OuterVolumeSpecName: "config") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.161915 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.173660 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.188896 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.188939 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.188951 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.188963 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-config\") on node 
\"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.192954 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95585638-93ab-482b-8618-20d1e1d2b01b" (UID: "95585638-93ab-482b-8618-20d1e1d2b01b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.291348 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95585638-93ab-482b-8618-20d1e1d2b01b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.595085 4768 generic.go:334] "Generic (PLEG): container finished" podID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerID="6fbcefd90a1f0ed51dbcbe89f7eefa6f52a6f9090a143e5bb1e180f6866d8542" exitCode=2 Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.595119 4768 generic.go:334] "Generic (PLEG): container finished" podID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerID="82fa4221546de4c5ee73c233c6b679d35faa8293f26e94f1d5734bf1bccca6a1" exitCode=0 Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.595154 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerDied","Data":"6fbcefd90a1f0ed51dbcbe89f7eefa6f52a6f9090a143e5bb1e180f6866d8542"} Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.595182 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerDied","Data":"82fa4221546de4c5ee73c233c6b679d35faa8293f26e94f1d5734bf1bccca6a1"} Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.598526 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" 
event={"ID":"95585638-93ab-482b-8618-20d1e1d2b01b","Type":"ContainerDied","Data":"1053353b5e81cd658c3996d2997449f26761a182b28ea790216bc54d1f544784"} Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.598569 4768 scope.go:117] "RemoveContainer" containerID="b747ef83a56779637975ed5d96d012c32b91d295deda16d4037d359aab91e76a" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.598727 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-ckv2h" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.625939 4768 scope.go:117] "RemoveContainer" containerID="049c55e34f4202e21adfaa9c3283e810621bca484b8d06089d147226159139f5" Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.633945 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:53:11 crc kubenswrapper[4768]: I0223 18:53:11.644551 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-ckv2h"] Feb 23 18:53:12 crc kubenswrapper[4768]: I0223 18:53:12.611435 4768 generic.go:334] "Generic (PLEG): container finished" podID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerID="c6e7a9b66d2250d2d559c9e3f04cf4e4171be2eb7100baee26e37a967f56f49e" exitCode=0 Feb 23 18:53:12 crc kubenswrapper[4768]: I0223 18:53:12.611877 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerDied","Data":"c6e7a9b66d2250d2d559c9e3f04cf4e4171be2eb7100baee26e37a967f56f49e"} Feb 23 18:53:13 crc kubenswrapper[4768]: I0223 18:53:13.333106 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" path="/var/lib/kubelet/pods/95585638-93ab-482b-8618-20d1e1d2b01b/volumes" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.638533 4768 generic.go:334] "Generic (PLEG): container finished" podID="9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" 
containerID="711b65af74f086abcc854f43ef9c992273c395b699345777aecec2930d31774c" exitCode=0 Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.639395 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n2g99" event={"ID":"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7","Type":"ContainerDied","Data":"711b65af74f086abcc854f43ef9c992273c395b699345777aecec2930d31774c"} Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.644879 4768 generic.go:334] "Generic (PLEG): container finished" podID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerID="9a02329cf4330379c24f2953dd8142a088b8e510800674af4a32fbe1ea54c7cc" exitCode=0 Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.644940 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerDied","Data":"9a02329cf4330379c24f2953dd8142a088b8e510800674af4a32fbe1ea54c7cc"} Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.644976 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f21626c4-fa00-4c3f-816f-fb1b27274150","Type":"ContainerDied","Data":"10ae469e020afbe7a321620280fb1c4e15adce1165127528c10fb536952691b5"} Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.644991 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10ae469e020afbe7a321620280fb1c4e15adce1165127528c10fb536952691b5" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.672399 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779319 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779544 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779597 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779634 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779902 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779944 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.779973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25cxv\" (UniqueName: \"kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv\") pod \"f21626c4-fa00-4c3f-816f-fb1b27274150\" (UID: \"f21626c4-fa00-4c3f-816f-fb1b27274150\") " Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.780549 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.780693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.790467 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts" (OuterVolumeSpecName: "scripts") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.792565 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv" (OuterVolumeSpecName: "kube-api-access-25cxv") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "kube-api-access-25cxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.839195 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.879377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882237 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882276 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882289 4768 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882298 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25cxv\" (UniqueName: \"kubernetes.io/projected/f21626c4-fa00-4c3f-816f-fb1b27274150-kube-api-access-25cxv\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882317 4768 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.882327 4768 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f21626c4-fa00-4c3f-816f-fb1b27274150-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.914006 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: 
"f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.958597 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data" (OuterVolumeSpecName: "config-data") pod "f21626c4-fa00-4c3f-816f-fb1b27274150" (UID: "f21626c4-fa00-4c3f-816f-fb1b27274150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.983662 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:14 crc kubenswrapper[4768]: I0223 18:53:14.983947 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f21626c4-fa00-4c3f-816f-fb1b27274150-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.655965 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.707465 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.728763 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.736922 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737396 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="dnsmasq-dns" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737418 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="dnsmasq-dns" Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737437 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="proxy-httpd" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737446 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="proxy-httpd" Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737455 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="init" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737464 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="init" Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737485 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-central-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737492 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-central-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737507 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="sg-core" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737517 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="sg-core" Feb 23 18:53:15 crc kubenswrapper[4768]: E0223 18:53:15.737532 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-notification-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737540 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-notification-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737744 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-central-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737762 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="proxy-httpd" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737778 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="sg-core" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737789 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="95585638-93ab-482b-8618-20d1e1d2b01b" containerName="dnsmasq-dns" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.737803 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" containerName="ceilometer-notification-agent" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.739826 4768 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.745997 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.746353 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.746620 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.761342 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.804484 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzlnk\" (UniqueName: \"kubernetes.io/projected/19bdd7e2-6cde-4412-b74b-eedc6428ac63-kube-api-access-hzlnk\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805028 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805057 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-scripts\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805127 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-run-httpd\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805163 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-config-data\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-log-httpd\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.805212 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.906802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-run-httpd\") pod 
\"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.906873 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-config-data\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.906908 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-log-httpd\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.906928 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907037 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzlnk\" (UniqueName: \"kubernetes.io/projected/19bdd7e2-6cde-4412-b74b-eedc6428ac63-kube-api-access-hzlnk\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907060 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907079 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907108 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-scripts\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-run-httpd\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.907616 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/19bdd7e2-6cde-4412-b74b-eedc6428ac63-log-httpd\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.913206 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.913916 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" 
Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.915280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-config-data\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.920732 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.923004 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/19bdd7e2-6cde-4412-b74b-eedc6428ac63-scripts\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:15 crc kubenswrapper[4768]: I0223 18:53:15.928809 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzlnk\" (UniqueName: \"kubernetes.io/projected/19bdd7e2-6cde-4412-b74b-eedc6428ac63-kube-api-access-hzlnk\") pod \"ceilometer-0\" (UID: \"19bdd7e2-6cde-4412-b74b-eedc6428ac63\") " pod="openstack/ceilometer-0" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.066280 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.157026 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.215798 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data\") pod \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.215923 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle\") pod \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.215985 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2j9t\" (UniqueName: \"kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t\") pod \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.216043 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts\") pod \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\" (UID: \"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7\") " Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.220105 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts" (OuterVolumeSpecName: "scripts") pod "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" (UID: "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.221407 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t" (OuterVolumeSpecName: "kube-api-access-f2j9t") pod "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" (UID: "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7"). InnerVolumeSpecName "kube-api-access-f2j9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.258510 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data" (OuterVolumeSpecName: "config-data") pod "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" (UID: "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.269767 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" (UID: "9eca568c-ae88-4fbc-8f82-a20f41ee0ef7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.319863 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.319915 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.319932 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2j9t\" (UniqueName: \"kubernetes.io/projected/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-kube-api-access-f2j9t\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.319941 4768 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7-scripts\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.542353 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.672646 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n2g99" event={"ID":"9eca568c-ae88-4fbc-8f82-a20f41ee0ef7","Type":"ContainerDied","Data":"804edb81a1879e375562dc944b524f33b36b8e083bfe26f350cd6a32ae952ebd"} Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.672739 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="804edb81a1879e375562dc944b524f33b36b8e083bfe26f350cd6a32ae952ebd" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.672673 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n2g99" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.674567 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19bdd7e2-6cde-4412-b74b-eedc6428ac63","Type":"ContainerStarted","Data":"b53d13baf8cb7f97cc3c08b2fe505c94202ce3da90a7f7943a8f26d95719d80f"} Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.897218 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.897322 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.943460 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.943843 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="59205d38-cfa3-4689-b3df-087dbf419370" containerName="nova-scheduler-scheduler" containerID="cri-o://fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a" gracePeriod=30 Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.959766 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.974812 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.975146 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" containerID="cri-o://155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654" gracePeriod=30 Feb 23 18:53:16 crc kubenswrapper[4768]: I0223 18:53:16.975840 4768 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-metadata-0" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" containerID="cri-o://1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414" gracePeriod=30 Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.367852 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f21626c4-fa00-4c3f-816f-fb1b27274150" path="/var/lib/kubelet/pods/f21626c4-fa00-4c3f-816f-fb1b27274150/volumes" Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.690452 4768 generic.go:334] "Generic (PLEG): container finished" podID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerID="155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654" exitCode=143 Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.690734 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerDied","Data":"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654"} Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.693992 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19bdd7e2-6cde-4412-b74b-eedc6428ac63","Type":"ContainerStarted","Data":"e656ca759c07f5145fbb982fbe579b744253f42d256b5572da430515cce53c84"} Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.694181 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-log" containerID="cri-o://2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562" gracePeriod=30 Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.694272 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-api" 
containerID="cri-o://4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95" gracePeriod=30 Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.702280 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF" Feb 23 18:53:17 crc kubenswrapper[4768]: I0223 18:53:17.702292 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF" Feb 23 18:53:18 crc kubenswrapper[4768]: I0223 18:53:18.706849 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19bdd7e2-6cde-4412-b74b-eedc6428ac63","Type":"ContainerStarted","Data":"787564f85d9da0d464a4425a43453d4b1a4344cd50611b6f191ce191b94435da"} Feb 23 18:53:18 crc kubenswrapper[4768]: I0223 18:53:18.707634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19bdd7e2-6cde-4412-b74b-eedc6428ac63","Type":"ContainerStarted","Data":"f61b900489b202bff0ea85f2fe6a48b62c1a30b76be5ce6ea43ef611895b0ffe"} Feb 23 18:53:18 crc kubenswrapper[4768]: I0223 18:53:18.709920 4768 generic.go:334] "Generic (PLEG): container finished" podID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerID="2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562" exitCode=143 Feb 23 18:53:18 crc kubenswrapper[4768]: I0223 18:53:18.709985 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerDied","Data":"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562"} Feb 23 18:53:20 crc kubenswrapper[4768]: I0223 18:53:20.147485 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" 
podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58176->10.217.0.198:8775: read: connection reset by peer" Feb 23 18:53:20 crc kubenswrapper[4768]: I0223 18:53:20.148182 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:58188->10.217.0.198:8775: read: connection reset by peer" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.656628 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.744924 4768 generic.go:334] "Generic (PLEG): container finished" podID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerID="1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414" exitCode=0 Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.744989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerDied","Data":"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414"} Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745029 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"50c2bf43-618c-44de-8b37-d017a5cc896a","Type":"ContainerDied","Data":"421df262b3c66d2fdd33f6e65ae40c508d5cb659b739cf8bb33eaa3d3fe7e8cc"} Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745054 4768 scope.go:117] "RemoveContainer" containerID="1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745346 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745450 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs\") pod \"50c2bf43-618c-44de-8b37-d017a5cc896a\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745689 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle\") pod \"50c2bf43-618c-44de-8b37-d017a5cc896a\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlcf9\" (UniqueName: \"kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9\") pod \"50c2bf43-618c-44de-8b37-d017a5cc896a\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.745912 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs\") pod \"50c2bf43-618c-44de-8b37-d017a5cc896a\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.746363 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data\") pod \"50c2bf43-618c-44de-8b37-d017a5cc896a\" (UID: \"50c2bf43-618c-44de-8b37-d017a5cc896a\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.749445 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs" (OuterVolumeSpecName: "logs") pod "50c2bf43-618c-44de-8b37-d017a5cc896a" (UID: "50c2bf43-618c-44de-8b37-d017a5cc896a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.820818 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9" (OuterVolumeSpecName: "kube-api-access-wlcf9") pod "50c2bf43-618c-44de-8b37-d017a5cc896a" (UID: "50c2bf43-618c-44de-8b37-d017a5cc896a"). InnerVolumeSpecName "kube-api-access-wlcf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.827325 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data" (OuterVolumeSpecName: "config-data") pod "50c2bf43-618c-44de-8b37-d017a5cc896a" (UID: "50c2bf43-618c-44de-8b37-d017a5cc896a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.827418 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c2bf43-618c-44de-8b37-d017a5cc896a" (UID: "50c2bf43-618c-44de-8b37-d017a5cc896a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.855773 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.855810 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlcf9\" (UniqueName: \"kubernetes.io/projected/50c2bf43-618c-44de-8b37-d017a5cc896a-kube-api-access-wlcf9\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.855824 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50c2bf43-618c-44de-8b37-d017a5cc896a-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.855834 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.860424 4768 scope.go:117] "RemoveContainer" containerID="155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.860488 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "50c2bf43-618c-44de-8b37-d017a5cc896a" (UID: "50c2bf43-618c-44de-8b37-d017a5cc896a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.891951 4768 scope.go:117] "RemoveContainer" containerID="1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414" Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:20.892558 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414\": container with ID starting with 1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414 not found: ID does not exist" containerID="1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.892610 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414"} err="failed to get container status \"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414\": rpc error: code = NotFound desc = could not find container \"1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414\": container with ID starting with 1ece9b7eb4b2a940f2e13ff9e89c425d41664016486b2e8892bf63eeb9fca414 not found: ID does not exist" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.892653 4768 scope.go:117] "RemoveContainer" containerID="155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654" Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:20.894630 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654\": container with ID starting with 155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654 not found: ID does not exist" containerID="155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.894682 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654"} err="failed to get container status \"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654\": rpc error: code = NotFound desc = could not find container \"155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654\": container with ID starting with 155186b341f19d0b80d6fe7f4399f5a972384026596dd83f9f0dc6a8f4585654 not found: ID does not exist" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:20.957264 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c2bf43-618c-44de-8b37-d017a5cc896a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.092007 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.120556 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.128725 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:21.129578 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" containerName="nova-manage" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.129599 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" containerName="nova-manage" Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:21.129643 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.129652 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:21.129675 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.129682 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.130050 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-metadata" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.130079 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" containerName="nova-manage" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.130099 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" containerName="nova-metadata-log" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.131911 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.149259 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.149592 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.151536 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.191783 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-config-data\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.192026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.192112 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g624b\" (UniqueName: \"kubernetes.io/projected/08feb509-1dff-446f-bdf1-47c5bc09f772-kube-api-access-g624b\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.192167 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08feb509-1dff-446f-bdf1-47c5bc09f772-logs\") pod \"nova-metadata-0\" (UID: 
\"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.192372 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.296900 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.296968 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g624b\" (UniqueName: \"kubernetes.io/projected/08feb509-1dff-446f-bdf1-47c5bc09f772-kube-api-access-g624b\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.297003 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08feb509-1dff-446f-bdf1-47c5bc09f772-logs\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.297058 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 
18:53:21.297119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-config-data\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.301036 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08feb509-1dff-446f-bdf1-47c5bc09f772-logs\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.305931 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-config-data\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.313463 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.313475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/08feb509-1dff-446f-bdf1-47c5bc09f772-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.320242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g624b\" (UniqueName: \"kubernetes.io/projected/08feb509-1dff-446f-bdf1-47c5bc09f772-kube-api-access-g624b\") pod 
\"nova-metadata-0\" (UID: \"08feb509-1dff-446f-bdf1-47c5bc09f772\") " pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.334102 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c2bf43-618c-44de-8b37-d017a5cc896a" path="/var/lib/kubelet/pods/50c2bf43-618c-44de-8b37-d017a5cc896a/volumes" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.477648 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.616857 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.722168 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle\") pod \"59205d38-cfa3-4689-b3df-087dbf419370\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.722266 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ntf\" (UniqueName: \"kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf\") pod \"59205d38-cfa3-4689-b3df-087dbf419370\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.722316 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data\") pod \"59205d38-cfa3-4689-b3df-087dbf419370\" (UID: \"59205d38-cfa3-4689-b3df-087dbf419370\") " Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.728126 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf" 
(OuterVolumeSpecName: "kube-api-access-g9ntf") pod "59205d38-cfa3-4689-b3df-087dbf419370" (UID: "59205d38-cfa3-4689-b3df-087dbf419370"). InnerVolumeSpecName "kube-api-access-g9ntf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.763835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59205d38-cfa3-4689-b3df-087dbf419370" (UID: "59205d38-cfa3-4689-b3df-087dbf419370"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783277 4768 generic.go:334] "Generic (PLEG): container finished" podID="59205d38-cfa3-4689-b3df-087dbf419370" containerID="fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a" exitCode=0 Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783466 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783473 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data" (OuterVolumeSpecName: "config-data") pod "59205d38-cfa3-4689-b3df-087dbf419370" (UID: "59205d38-cfa3-4689-b3df-087dbf419370"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783562 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59205d38-cfa3-4689-b3df-087dbf419370","Type":"ContainerDied","Data":"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a"} Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783659 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"59205d38-cfa3-4689-b3df-087dbf419370","Type":"ContainerDied","Data":"ba5f2dcb415fed1f932c45e413e2ece0c11798147b1752e2b1dba8a5ee39fe0d"} Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.783681 4768 scope.go:117] "RemoveContainer" containerID="fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.811341 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"19bdd7e2-6cde-4412-b74b-eedc6428ac63","Type":"ContainerStarted","Data":"9c7ab789e6c230cad016669537ec76d07916d9c0b7c2be1f9bfe725d3e63d21b"} Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.814307 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.826187 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.826228 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ntf\" (UniqueName: \"kubernetes.io/projected/59205d38-cfa3-4689-b3df-087dbf419370-kube-api-access-g9ntf\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.826239 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/59205d38-cfa3-4689-b3df-087dbf419370-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.852590 4768 scope.go:117] "RemoveContainer" containerID="fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a" Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:21.857382 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a\": container with ID starting with fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a not found: ID does not exist" containerID="fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.857434 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a"} err="failed to get container status \"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a\": rpc error: code = NotFound desc = could not find container \"fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a\": container with ID starting with fc09d72f93432c0e8bea60ec5e9a60addb173db2a97ae49c1eeed13e3d3a469a not found: ID does not exist" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.875804 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.393136016 podStartE2EDuration="6.875777733s" podCreationTimestamp="2026-02-23 18:53:15 +0000 UTC" firstStartedPulling="2026-02-23 18:53:16.544920875 +0000 UTC m=+1191.935406685" lastFinishedPulling="2026-02-23 18:53:21.027562602 +0000 UTC m=+1196.418048402" observedRunningTime="2026-02-23 18:53:21.847675625 +0000 UTC m=+1197.238161435" watchObservedRunningTime="2026-02-23 18:53:21.875777733 +0000 UTC m=+1197.266263533" Feb 23 18:53:21 crc kubenswrapper[4768]: 
I0223 18:53:21.897261 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.912983 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.934409 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:21 crc kubenswrapper[4768]: E0223 18:53:21.935094 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59205d38-cfa3-4689-b3df-087dbf419370" containerName="nova-scheduler-scheduler" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.935165 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="59205d38-cfa3-4689-b3df-087dbf419370" containerName="nova-scheduler-scheduler" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.935435 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="59205d38-cfa3-4689-b3df-087dbf419370" containerName="nova-scheduler-scheduler" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.936297 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.941120 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 23 18:53:21 crc kubenswrapper[4768]: I0223 18:53:21.945473 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.030093 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwwj\" (UniqueName: \"kubernetes.io/projected/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-kube-api-access-7fwwj\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.030172 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-config-data\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.030943 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.133864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwwj\" (UniqueName: \"kubernetes.io/projected/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-kube-api-access-7fwwj\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.133953 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-config-data\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.134111 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.138896 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-config-data\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.140302 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.143206 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.155563 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwwj\" (UniqueName: \"kubernetes.io/projected/7a66d66d-e9d1-4407-9e7e-268f1e7f0feb-kube-api-access-7fwwj\") pod \"nova-scheduler-0\" (UID: \"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb\") " pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.276930 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.823237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08feb509-1dff-446f-bdf1-47c5bc09f772","Type":"ContainerStarted","Data":"1decd43ece6a4ad6240fa740020afc0d365dd18cdaad0277305254fcb0c63e63"} Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.823878 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08feb509-1dff-446f-bdf1-47c5bc09f772","Type":"ContainerStarted","Data":"a3e3b583b27268f8523402291900e8aba25bebfa03e4a236253a0d2ca44b23a9"} Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.823888 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"08feb509-1dff-446f-bdf1-47c5bc09f772","Type":"ContainerStarted","Data":"bc24b58866fc6991454a88621a11e7910cbd6b8137e69beabacbba56f95eb030"} Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.927676 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.9276517420000001 podStartE2EDuration="1.927651742s" podCreationTimestamp="2026-02-23 18:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:22.845735172 +0000 UTC m=+1198.236220972" watchObservedRunningTime="2026-02-23 18:53:22.927651742 +0000 UTC m=+1198.318137532" Feb 23 18:53:22 crc kubenswrapper[4768]: I0223 18:53:22.935097 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 23 18:53:22 crc kubenswrapper[4768]: W0223 18:53:22.935437 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a66d66d_e9d1_4407_9e7e_268f1e7f0feb.slice/crio-532417d0ad34b1ed2103285f9d064ad7e1fcc7e826d2fa9bb5938e2532d5fef0 
WatchSource:0}: Error finding container 532417d0ad34b1ed2103285f9d064ad7e1fcc7e826d2fa9bb5938e2532d5fef0: Status 404 returned error can't find the container with id 532417d0ad34b1ed2103285f9d064ad7e1fcc7e826d2fa9bb5938e2532d5fef0 Feb 23 18:53:23 crc kubenswrapper[4768]: I0223 18:53:23.319180 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59205d38-cfa3-4689-b3df-087dbf419370" path="/var/lib/kubelet/pods/59205d38-cfa3-4689-b3df-087dbf419370/volumes" Feb 23 18:53:23 crc kubenswrapper[4768]: I0223 18:53:23.835616 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb","Type":"ContainerStarted","Data":"4a0a198a0b79a7582c701eb8c6c4a7219e5888b34092fa3463c2792392dfc3d1"} Feb 23 18:53:23 crc kubenswrapper[4768]: I0223 18:53:23.836094 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7a66d66d-e9d1-4407-9e7e-268f1e7f0feb","Type":"ContainerStarted","Data":"532417d0ad34b1ed2103285f9d064ad7e1fcc7e826d2fa9bb5938e2532d5fef0"} Feb 23 18:53:23 crc kubenswrapper[4768]: I0223 18:53:23.863691 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.863670523 podStartE2EDuration="2.863670523s" podCreationTimestamp="2026-02-23 18:53:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:23.856927988 +0000 UTC m=+1199.247413788" watchObservedRunningTime="2026-02-23 18:53:23.863670523 +0000 UTC m=+1199.254156333" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.684351 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.823521 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.823579 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm5h9\" (UniqueName: \"kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.823709 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.823796 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.823842 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.824024 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs\") pod \"7227c72c-da97-4e44-8887-7b2b26d3da8b\" (UID: \"7227c72c-da97-4e44-8887-7b2b26d3da8b\") " Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.824443 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs" (OuterVolumeSpecName: "logs") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.824799 4768 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7227c72c-da97-4e44-8887-7b2b26d3da8b-logs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.845038 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9" (OuterVolumeSpecName: "kube-api-access-cm5h9") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "kube-api-access-cm5h9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.891739 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.927625 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.927688 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm5h9\" (UniqueName: \"kubernetes.io/projected/7227c72c-da97-4e44-8887-7b2b26d3da8b-kube-api-access-cm5h9\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.962375 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data" (OuterVolumeSpecName: "config-data") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.977567 4768 generic.go:334] "Generic (PLEG): container finished" podID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerID="4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95" exitCode=0 Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.978955 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.979411 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerDied","Data":"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95"} Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.979436 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7227c72c-da97-4e44-8887-7b2b26d3da8b","Type":"ContainerDied","Data":"c8233b17439a6ca9d7c9bea4752c7fe8604d3b23f066e05001947362afc610d0"} Feb 23 18:53:24 crc kubenswrapper[4768]: I0223 18:53:24.979452 4768 scope.go:117] "RemoveContainer" containerID="4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.027385 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.030721 4768 scope.go:117] "RemoveContainer" containerID="2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.031825 4768 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.031848 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.053226 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7227c72c-da97-4e44-8887-7b2b26d3da8b" (UID: "7227c72c-da97-4e44-8887-7b2b26d3da8b"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.098429 4768 scope.go:117] "RemoveContainer" containerID="4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95" Feb 23 18:53:25 crc kubenswrapper[4768]: E0223 18:53:25.104228 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95\": container with ID starting with 4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95 not found: ID does not exist" containerID="4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.104299 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95"} err="failed to get container status \"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95\": rpc error: code = NotFound desc = could not find container \"4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95\": container with ID starting with 4545551080e46f997cc29eab9f3dcb98375499a086959766efb9a7d76a859e95 not found: ID does not exist" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.104325 4768 scope.go:117] "RemoveContainer" containerID="2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562" Feb 23 18:53:25 crc kubenswrapper[4768]: E0223 18:53:25.104810 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562\": container with ID starting with 2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562 not found: ID does not exist" containerID="2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.104837 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562"} err="failed to get container status \"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562\": rpc error: code = NotFound desc = could not find container \"2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562\": container with ID starting with 2efda0ea4037d49324c4e8fd0e189f57a41090da3700313705524880b7a59562 not found: ID does not exist" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.142979 4768 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7227c72c-da97-4e44-8887-7b2b26d3da8b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.345519 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.354134 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.363184 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:25 crc kubenswrapper[4768]: E0223 18:53:25.363684 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-log" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.363714 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-log" Feb 23 18:53:25 crc kubenswrapper[4768]: E0223 18:53:25.363725 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-api" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.363736 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" 
containerName="nova-api-api" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.363983 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-api" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.364015 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" containerName="nova-api-log" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.365230 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.398466 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.398895 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.399801 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.421039 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.449974 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-public-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.450081 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-config-data\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc 
kubenswrapper[4768]: I0223 18:53:25.450110 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.450158 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv4sc\" (UniqueName: \"kubernetes.io/projected/b973b91e-764a-461b-a4ca-50185f1f70af-kube-api-access-vv4sc\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.450196 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b973b91e-764a-461b-a4ca-50185f1f70af-logs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.450259 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.552205 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b973b91e-764a-461b-a4ca-50185f1f70af-logs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.552328 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.552490 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-public-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.552508 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-config-data\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.553139 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.553215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv4sc\" (UniqueName: \"kubernetes.io/projected/b973b91e-764a-461b-a4ca-50185f1f70af-kube-api-access-vv4sc\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.555118 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b973b91e-764a-461b-a4ca-50185f1f70af-logs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.560434 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.561943 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-config-data\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.563431 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.574041 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b973b91e-764a-461b-a4ca-50185f1f70af-public-tls-certs\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.580807 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv4sc\" (UniqueName: \"kubernetes.io/projected/b973b91e-764a-461b-a4ca-50185f1f70af-kube-api-access-vv4sc\") pod \"nova-api-0\" (UID: \"b973b91e-764a-461b-a4ca-50185f1f70af\") " pod="openstack/nova-api-0" Feb 23 18:53:25 crc kubenswrapper[4768]: I0223 18:53:25.690941 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 23 18:53:26 crc kubenswrapper[4768]: I0223 18:53:26.205598 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 23 18:53:26 crc kubenswrapper[4768]: I0223 18:53:26.478450 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 18:53:26 crc kubenswrapper[4768]: I0223 18:53:26.478668 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 23 18:53:27 crc kubenswrapper[4768]: I0223 18:53:27.002410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b973b91e-764a-461b-a4ca-50185f1f70af","Type":"ContainerStarted","Data":"f357e2192ee9f19e8856437e74dc46742c01a483261affd4d1279c745fa95839"} Feb 23 18:53:27 crc kubenswrapper[4768]: I0223 18:53:27.002882 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b973b91e-764a-461b-a4ca-50185f1f70af","Type":"ContainerStarted","Data":"95aff66f15995bc1b6bab1c4bf397b47375e1d3b8541a3e1f3a190ee306e0b14"} Feb 23 18:53:27 crc kubenswrapper[4768]: I0223 18:53:27.002894 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b973b91e-764a-461b-a4ca-50185f1f70af","Type":"ContainerStarted","Data":"57532c112e539b0f22fe5a080296284d20278e783bda1381cb1a312d9af488da"} Feb 23 18:53:27 crc kubenswrapper[4768]: I0223 18:53:27.034228 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.034202457 podStartE2EDuration="2.034202457s" podCreationTimestamp="2026-02-23 18:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:53:27.024731178 +0000 UTC m=+1202.415216988" watchObservedRunningTime="2026-02-23 18:53:27.034202457 +0000 UTC m=+1202.424688257" Feb 23 18:53:27 crc 
kubenswrapper[4768]: I0223 18:53:27.277209 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 23 18:53:27 crc kubenswrapper[4768]: I0223 18:53:27.319803 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7227c72c-da97-4e44-8887-7b2b26d3da8b" path="/var/lib/kubelet/pods/7227c72c-da97-4e44-8887-7b2b26d3da8b/volumes"
Feb 23 18:53:31 crc kubenswrapper[4768]: I0223 18:53:31.478788 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 23 18:53:31 crc kubenswrapper[4768]: I0223 18:53:31.479396 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 23 18:53:32 crc kubenswrapper[4768]: I0223 18:53:32.277198 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 23 18:53:32 crc kubenswrapper[4768]: I0223 18:53:32.322061 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 23 18:53:32 crc kubenswrapper[4768]: I0223 18:53:32.496357 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="08feb509-1dff-446f-bdf1-47c5bc09f772" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:53:32 crc kubenswrapper[4768]: I0223 18:53:32.496397 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="08feb509-1dff-446f-bdf1-47c5bc09f772" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:53:33 crc kubenswrapper[4768]: I0223 18:53:33.106280 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 23 18:53:35 crc kubenswrapper[4768]: I0223 18:53:35.692859 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 18:53:35 crc kubenswrapper[4768]: I0223 18:53:35.693385 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 23 18:53:36 crc kubenswrapper[4768]: I0223 18:53:36.719598 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b973b91e-764a-461b-a4ca-50185f1f70af" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:53:36 crc kubenswrapper[4768]: I0223 18:53:36.719607 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b973b91e-764a-461b-a4ca-50185f1f70af" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.210:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 23 18:53:39 crc kubenswrapper[4768]: I0223 18:53:39.544680 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 18:53:39 crc kubenswrapper[4768]: I0223 18:53:39.544983 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 18:53:39 crc kubenswrapper[4768]: I0223 18:53:39.545028 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9"
Feb 23 18:53:39 crc kubenswrapper[4768]: I0223 18:53:39.545728 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 23 18:53:39 crc kubenswrapper[4768]: I0223 18:53:39.545787 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f" gracePeriod=600
Feb 23 18:53:40 crc kubenswrapper[4768]: I0223 18:53:40.160495 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f" exitCode=0
Feb 23 18:53:40 crc kubenswrapper[4768]: I0223 18:53:40.160754 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f"}
Feb 23 18:53:40 crc kubenswrapper[4768]: I0223 18:53:40.161084 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c"}
Feb 23 18:53:40 crc kubenswrapper[4768]: I0223 18:53:40.161115 4768 scope.go:117] "RemoveContainer" containerID="786bab7731b00b23523b13fa7e10ac65a60b043dfe0ad9d117ecf340ff5d7aa0"
Feb 23 18:53:41 crc kubenswrapper[4768]: I0223 18:53:41.487454 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 23 18:53:41 crc kubenswrapper[4768]: I0223 18:53:41.493805 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 23 18:53:41 crc kubenswrapper[4768]: I0223 18:53:41.503220 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 23 18:53:42 crc kubenswrapper[4768]: I0223 18:53:42.200296 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 23 18:53:45 crc kubenswrapper[4768]: I0223 18:53:45.711145 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 23 18:53:45 crc kubenswrapper[4768]: I0223 18:53:45.712469 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 23 18:53:45 crc kubenswrapper[4768]: I0223 18:53:45.719778 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 23 18:53:45 crc kubenswrapper[4768]: I0223 18:53:45.721498 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 23 18:53:46 crc kubenswrapper[4768]: I0223 18:53:46.081972 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 23 18:53:46 crc kubenswrapper[4768]: I0223 18:53:46.235496 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 23 18:53:46 crc kubenswrapper[4768]: I0223 18:53:46.247625 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 23 18:53:55 crc kubenswrapper[4768]: I0223 18:53:55.962581 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 23 18:53:56 crc kubenswrapper[4768]: I0223 18:53:56.939084 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 18:54:01 crc kubenswrapper[4768]: I0223 18:54:01.016098 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="rabbitmq" containerID="cri-o://dc99159db18f1bf85e1516936378fe88ec435033a46902b949c8d19a8920befb" gracePeriod=604795
Feb 23 18:54:01 crc kubenswrapper[4768]: I0223 18:54:01.558746 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" containerID="cri-o://3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65" gracePeriod=604796
Feb 23 18:54:02 crc kubenswrapper[4768]: I0223 18:54:02.857160 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused"
Feb 23 18:54:03 crc kubenswrapper[4768]: I0223 18:54:03.201786 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused"
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.498940 4768 generic.go:334] "Generic (PLEG): container finished" podID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerID="dc99159db18f1bf85e1516936378fe88ec435033a46902b949c8d19a8920befb" exitCode=0
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.499039 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerDied","Data":"dc99159db18f1bf85e1516936378fe88ec435033a46902b949c8d19a8920befb"}
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.642048 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822005 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822106 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822163 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822200 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822260 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822295 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822337 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822367 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822423 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822453 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphq4\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.822480 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie\") pod \"a52ce7bc-e9a8-474d-87de-598d337bc360\" (UID: \"a52ce7bc-e9a8-474d-87de-598d337bc360\") "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.823338 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.824889 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.847180 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.855353 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.858580 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.858784 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info" (OuterVolumeSpecName: "pod-info") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.874758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4" (OuterVolumeSpecName: "kube-api-access-xphq4") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "kube-api-access-xphq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.901402 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924613 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphq4\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-kube-api-access-xphq4\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924644 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924653 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924667 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a52ce7bc-e9a8-474d-87de-598d337bc360-pod-info\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924681 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924691 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924734 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Feb 23 18:54:07 crc kubenswrapper[4768]: I0223 18:54:07.924746 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a52ce7bc-e9a8-474d-87de-598d337bc360-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.007062 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.023778 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data" (OuterVolumeSpecName: "config-data") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.026423 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.026458 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.045606 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf" (OuterVolumeSpecName: "server-conf") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.059351 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a52ce7bc-e9a8-474d-87de-598d337bc360" (UID: "a52ce7bc-e9a8-474d-87de-598d337bc360"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.134927 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a52ce7bc-e9a8-474d-87de-598d337bc360-server-conf\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.134984 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a52ce7bc-e9a8-474d-87de-598d337bc360-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.254294 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338371 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338446 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338472 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9cp\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338509 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338577 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338604 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338666 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338748 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338793 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338836 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.338882 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret\") pod \"2cb8a262-174b-47ef-adb6-a67384a373f1\" (UID: \"2cb8a262-174b-47ef-adb6-a67384a373f1\") "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.339978 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.340454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.342466 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.368437 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.370070 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.371871 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info" (OuterVolumeSpecName: "pod-info") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.376810 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp" (OuterVolumeSpecName: "kube-api-access-4n9cp") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "kube-api-access-4n9cp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.378438 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.391999 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data" (OuterVolumeSpecName: "config-data") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441314 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-config-data\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441348 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441367 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441376 4768 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2cb8a262-174b-47ef-adb6-a67384a373f1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441385 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441395 4768 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2cb8a262-174b-47ef-adb6-a67384a373f1-pod-info\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441406 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9cp\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-kube-api-access-4n9cp\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441414 4768 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.441423 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.449355 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf" (OuterVolumeSpecName: "server-conf") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.466799 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.502217 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2cb8a262-174b-47ef-adb6-a67384a373f1" (UID: "2cb8a262-174b-47ef-adb6-a67384a373f1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.519115 4768 generic.go:334] "Generic (PLEG): container finished" podID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerID="3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65" exitCode=0
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.519169 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerDied","Data":"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"}
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.519203 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.519304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2cb8a262-174b-47ef-adb6-a67384a373f1","Type":"ContainerDied","Data":"26b9757c2a1c332ff3224f10ffc00eb3153a22859a91180c0077bb7c607fa3ab"}
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.519329 4768 scope.go:117] "RemoveContainer" containerID="3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.523079 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a52ce7bc-e9a8-474d-87de-598d337bc360","Type":"ContainerDied","Data":"ef933112982f1aebaf01c0fcf8723602b73a21e08ecbb42b5c90270093b5f808"}
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.523201 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.549551 4768 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2cb8a262-174b-47ef-adb6-a67384a373f1-server-conf\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.549578 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.549587 4768 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2cb8a262-174b-47ef-adb6-a67384a373f1-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.553710 4768 scope.go:117] "RemoveContainer" containerID="173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.562162 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.580899 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.593546 4768 scope.go:117] "RemoveContainer" containerID="3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"
Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.594725 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65\": container with ID starting with 3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65 not found: ID does not exist" containerID="3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.594763 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65"} err="failed to get container status \"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65\": rpc error: code = NotFound desc = could not find container \"3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65\": container with ID starting with 3589f053ff7a7b6f3722b4aff6b3c84b7f65f247aa7483ef514faa599524fd65 not found: ID does not exist"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.594790 4768 scope.go:117] "RemoveContainer" containerID="173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"
Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.595049 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c\": container with ID starting with 173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c not found: ID does not exist" containerID="173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.595071 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c"} err="failed to get container status \"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c\": rpc error: code = NotFound desc = could not find container \"173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c\": container with ID starting with 173f567940bc9e7d12562c8311b0e6fa736edc2d7fb93fa710f6fe229b843f1c not found: ID does not exist"
Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.595085 4768 scope.go:117] "RemoveContainer" containerID="dc99159db18f1bf85e1516936378fe88ec435033a46902b949c8d19a8920befb"
Feb 23 
18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.607377 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.626419 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.634539 4768 scope.go:117] "RemoveContainer" containerID="8950944ed237e1903bb4e956e9e9496fa8c259943744c2c4afe591a90782d9cf" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.638516 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.639039 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="setup-container" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639065 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="setup-container" Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.639089 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639099 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.639126 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639134 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: E0223 18:54:08.639155 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" 
containerName="setup-container" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639163 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="setup-container" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639416 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.639443 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" containerName="rabbitmq" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.640603 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.642729 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.643026 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.643194 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.643380 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.643718 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.647230 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-dfxbv" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.647392 4768 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.650705 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.666827 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.668428 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.672208 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-bglfm" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.672484 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.672973 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.673264 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.673370 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.673465 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.681695 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.686636 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752759 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752826 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752859 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752900 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.752984 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753017 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753121 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753148 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753179 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753210 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwlnr\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-kube-api-access-nwlnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753588 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753724 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753752 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753789 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753867 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753893 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.753935 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.754025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhwc9\" (UniqueName: 
\"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-kube-api-access-mhwc9\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.754214 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856070 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856114 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856144 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856165 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-server-conf\") pod \"rabbitmq-server-0\" (UID: 
\"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856187 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856207 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856227 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856263 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwlnr\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-kube-api-access-nwlnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856299 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: 
I0223 18:54:08.856325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856346 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856365 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856386 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856418 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856441 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856466 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhwc9\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-kube-api-access-mhwc9\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856527 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856544 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856604 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.856629 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.857281 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.857745 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.857761 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: 
I0223 18:54:08.857927 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.858697 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.858973 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.858993 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.859239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.859352 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.859686 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.859837 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-config-data\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.859919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.864842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.872848 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 
23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.880629 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.881260 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.881780 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.882101 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhwc9\" (UniqueName: \"kubernetes.io/projected/3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc-kube-api-access-mhwc9\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.882901 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwlnr\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-kube-api-access-nwlnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.884421 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.888543 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nm8tp"] Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.888556 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.890469 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.894002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8cb5a51-f628-42ca-9f9a-002d2f2f3b00-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.899638 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.921445 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc\") " pod="openstack/rabbitmq-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.922685 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nm8tp"] Feb 23 18:54:08 crc 
kubenswrapper[4768]: I0223 18:54:08.928903 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00\") " pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.960545 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.960848 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.961084 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.961175 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 
18:54:08.961216 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz76c\" (UniqueName: \"kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.961406 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.961464 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.965939 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:08 crc kubenswrapper[4768]: I0223 18:54:08.995138 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.063757 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.063861 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.063909 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.063939 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz76c\" (UniqueName: \"kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.063992 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" 
Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.064026 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.064083 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.065389 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.066109 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.066483 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 
18:54:09.066855 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.066862 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.067492 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.075531 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nm8tp"] Feb 23 18:54:09 crc kubenswrapper[4768]: E0223 18:54:09.076730 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-fz76c], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" podUID="528c732a-1fab-4546-b27d-e825547020ff" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.083547 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz76c\" (UniqueName: \"kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c\") pod \"dnsmasq-dns-79bd4cc8c9-nm8tp\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc 
kubenswrapper[4768]: I0223 18:54:09.096713 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-9b4p5"] Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.098619 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.111095 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-9b4p5"] Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.168472 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-svc\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.168926 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.168975 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.169000 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6txh6\" (UniqueName: \"kubernetes.io/projected/cae4398a-0817-4c3e-8449-9082d6d21b59-kube-api-access-6txh6\") 
pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.169049 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.169082 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-config\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.169105 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.270991 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-config\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271068 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-nb\") pod 
\"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271145 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-svc\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271184 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271266 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271302 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6txh6\" (UniqueName: \"kubernetes.io/projected/cae4398a-0817-4c3e-8449-9082d6d21b59-kube-api-access-6txh6\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.271364 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-9b4p5\" 
(UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.272555 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-svc\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.272587 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.273195 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-config\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.273358 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.274038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 
18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.274071 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cae4398a-0817-4c3e-8449-9082d6d21b59-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.295933 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6txh6\" (UniqueName: \"kubernetes.io/projected/cae4398a-0817-4c3e-8449-9082d6d21b59-kube-api-access-6txh6\") pod \"dnsmasq-dns-55478c4467-9b4p5\" (UID: \"cae4398a-0817-4c3e-8449-9082d6d21b59\") " pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.321706 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb8a262-174b-47ef-adb6-a67384a373f1" path="/var/lib/kubelet/pods/2cb8a262-174b-47ef-adb6-a67384a373f1/volumes" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.322605 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52ce7bc-e9a8-474d-87de-598d337bc360" path="/var/lib/kubelet/pods/a52ce7bc-e9a8-474d-87de-598d337bc360/volumes" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.419031 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.587371 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.606751 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.625469 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 23 18:54:09 crc kubenswrapper[4768]: W0223 18:54:09.642854 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d3a3cc7_cb0f_40c3_b54c_86517ddf3efc.slice/crio-d09a8ee26e77d4424ef464bee2984d223b9d817fa71b0098fd64b641b10fd596 WatchSource:0}: Error finding container d09a8ee26e77d4424ef464bee2984d223b9d817fa71b0098fd64b641b10fd596: Status 404 returned error can't find the container with id d09a8ee26e77d4424ef464bee2984d223b9d817fa71b0098fd64b641b10fd596 Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.645214 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.678717 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.678841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.678883 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: 
\"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.679069 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.679188 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.679263 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.679282 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz76c\" (UniqueName: \"kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c\") pod \"528c732a-1fab-4546-b27d-e825547020ff\" (UID: \"528c732a-1fab-4546-b27d-e825547020ff\") " Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.680861 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config" (OuterVolumeSpecName: "config") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.680985 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.681268 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.682156 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.682626 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.683346 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.683433 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c" (OuterVolumeSpecName: "kube-api-access-fz76c") pod "528c732a-1fab-4546-b27d-e825547020ff" (UID: "528c732a-1fab-4546-b27d-e825547020ff"). InnerVolumeSpecName "kube-api-access-fz76c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781816 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781860 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781872 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781882 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-config\") on 
node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781891 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781901 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz76c\" (UniqueName: \"kubernetes.io/projected/528c732a-1fab-4546-b27d-e825547020ff-kube-api-access-fz76c\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.781911 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/528c732a-1fab-4546-b27d-e825547020ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:09 crc kubenswrapper[4768]: I0223 18:54:09.956184 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-9b4p5"] Feb 23 18:54:09 crc kubenswrapper[4768]: W0223 18:54:09.969072 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae4398a_0817_4c3e_8449_9082d6d21b59.slice/crio-8b56bb787eb8475fc8dd066bec679034dea8e7132573588e6f637de06a0ac954 WatchSource:0}: Error finding container 8b56bb787eb8475fc8dd066bec679034dea8e7132573588e6f637de06a0ac954: Status 404 returned error can't find the container with id 8b56bb787eb8475fc8dd066bec679034dea8e7132573588e6f637de06a0ac954 Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.600970 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00","Type":"ContainerStarted","Data":"e99bc35705d4c0e807dbc66f66bc4033ae94a3ac6313d702e205580a6e194c60"} Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.602984 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc","Type":"ContainerStarted","Data":"d09a8ee26e77d4424ef464bee2984d223b9d817fa71b0098fd64b641b10fd596"} Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.604931 4768 generic.go:334] "Generic (PLEG): container finished" podID="cae4398a-0817-4c3e-8449-9082d6d21b59" containerID="467667c595147da9d394271164f36e4b85611042eb20d0f20457a04694e136c5" exitCode=0 Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.604996 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.607359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" event={"ID":"cae4398a-0817-4c3e-8449-9082d6d21b59","Type":"ContainerDied","Data":"467667c595147da9d394271164f36e4b85611042eb20d0f20457a04694e136c5"} Feb 23 18:54:10 crc kubenswrapper[4768]: I0223 18:54:10.607404 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" event={"ID":"cae4398a-0817-4c3e-8449-9082d6d21b59","Type":"ContainerStarted","Data":"8b56bb787eb8475fc8dd066bec679034dea8e7132573588e6f637de06a0ac954"} Feb 23 18:54:11 crc kubenswrapper[4768]: I0223 18:54:11.621505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc","Type":"ContainerStarted","Data":"666be9e6ff7e4b090b4775eb91950852d024c5e18ac182866c8a4bad9a00634c"} Feb 23 18:54:12 crc kubenswrapper[4768]: I0223 18:54:12.639071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" event={"ID":"cae4398a-0817-4c3e-8449-9082d6d21b59","Type":"ContainerStarted","Data":"c0b016e8351b1ebd48b7156d18983d29619b9ef34f6b7b9b0acc05e89a2db829"} Feb 23 18:54:12 crc kubenswrapper[4768]: I0223 18:54:12.639550 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:12 crc kubenswrapper[4768]: I0223 18:54:12.642499 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00","Type":"ContainerStarted","Data":"8b2f9d26ee9ec4307f696e29bb39ae21dcdfc4167399759003d0af28368f340e"} Feb 23 18:54:12 crc kubenswrapper[4768]: I0223 18:54:12.672194 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" podStartSLOduration=3.672173076 podStartE2EDuration="3.672173076s" podCreationTimestamp="2026-02-23 18:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:54:12.662728927 +0000 UTC m=+1248.053214757" watchObservedRunningTime="2026-02-23 18:54:12.672173076 +0000 UTC m=+1248.062658876" Feb 23 18:54:19 crc kubenswrapper[4768]: I0223 18:54:19.420618 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-9b4p5" Feb 23 18:54:19 crc kubenswrapper[4768]: I0223 18:54:19.530724 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"] Feb 23 18:54:19 crc kubenswrapper[4768]: I0223 18:54:19.531037 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="dnsmasq-dns" containerID="cri-o://43cc8e6f4a80fe250c65ecc08ae8b76f36da949216b63c4563dbd007cb979888" gracePeriod=10 Feb 23 18:54:19 crc kubenswrapper[4768]: I0223 18:54:19.734196 4768 generic.go:334] "Generic (PLEG): container finished" podID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerID="43cc8e6f4a80fe250c65ecc08ae8b76f36da949216b63c4563dbd007cb979888" exitCode=0 Feb 23 18:54:19 crc kubenswrapper[4768]: I0223 18:54:19.734238 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" event={"ID":"841d70ea-a129-448e-bf61-2e13c1b19a96","Type":"ContainerDied","Data":"43cc8e6f4a80fe250c65ecc08ae8b76f36da949216b63c4563dbd007cb979888"} Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.155626 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.230838 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.230993 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.231334 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.231414 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.231535 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdcdh\" (UniqueName: 
\"kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.232819 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb\") pod \"841d70ea-a129-448e-bf61-2e13c1b19a96\" (UID: \"841d70ea-a129-448e-bf61-2e13c1b19a96\") " Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.238758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh" (OuterVolumeSpecName: "kube-api-access-vdcdh") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "kube-api-access-vdcdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.279874 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.282698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.288151 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config" (OuterVolumeSpecName: "config") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.289476 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.308912 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "841d70ea-a129-448e-bf61-2e13c1b19a96" (UID: "841d70ea-a129-448e-bf61-2e13c1b19a96"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335687 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335710 4768 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335719 4768 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335730 4768 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-config\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335739 4768 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/841d70ea-a129-448e-bf61-2e13c1b19a96-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.335748 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdcdh\" (UniqueName: \"kubernetes.io/projected/841d70ea-a129-448e-bf61-2e13c1b19a96-kube-api-access-vdcdh\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.747359 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" event={"ID":"841d70ea-a129-448e-bf61-2e13c1b19a96","Type":"ContainerDied","Data":"1dd2f723b2a1c3b37e978ee9dbdfcb9e287f386e797c64fbd6aee744e9ae2f53"} Feb 23 18:54:20 crc 
kubenswrapper[4768]: I0223 18:54:20.747743 4768 scope.go:117] "RemoveContainer" containerID="43cc8e6f4a80fe250c65ecc08ae8b76f36da949216b63c4563dbd007cb979888" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.747455 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.779740 4768 scope.go:117] "RemoveContainer" containerID="2a77ea6cbb27eb651f637e3c17c626ce6456d44fc252c222b3890ad6b5cad60e" Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.787852 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"] Feb 23 18:54:20 crc kubenswrapper[4768]: I0223 18:54:20.800659 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6mm5g"] Feb 23 18:54:21 crc kubenswrapper[4768]: I0223 18:54:21.321164 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" path="/var/lib/kubelet/pods/841d70ea-a129-448e-bf61-2e13c1b19a96/volumes" Feb 23 18:54:25 crc kubenswrapper[4768]: I0223 18:54:25.051027 4768 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-89c5cd4d5-6mm5g" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: i/o timeout" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.035467 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw"] Feb 23 18:54:28 crc kubenswrapper[4768]: E0223 18:54:28.037018 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="init" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.037038 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="init" Feb 23 18:54:28 crc 
kubenswrapper[4768]: E0223 18:54:28.037067 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="dnsmasq-dns" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.037074 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="dnsmasq-dns" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.037374 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="841d70ea-a129-448e-bf61-2e13c1b19a96" containerName="dnsmasq-dns" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.038197 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.042275 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.042994 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.043967 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.045965 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.053123 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw"] Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.135706 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzm58\" (UniqueName: \"kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.136041 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.136550 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.136689 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.238824 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.238915 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.238981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzm58\" (UniqueName: \"kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.239099 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.247648 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.249745 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.250645 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.276706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzm58\" (UniqueName: \"kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.361076 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.966995 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw"] Feb 23 18:54:28 crc kubenswrapper[4768]: W0223 18:54:28.981199 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7d9a362_95f1_4326_99a7_121ec8a4816f.slice/crio-fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc WatchSource:0}: Error finding container fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc: Status 404 returned error can't find the container with id fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc Feb 23 18:54:28 crc kubenswrapper[4768]: I0223 18:54:28.986422 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:54:29 crc kubenswrapper[4768]: I0223 18:54:29.875415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" event={"ID":"a7d9a362-95f1-4326-99a7-121ec8a4816f","Type":"ContainerStarted","Data":"fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc"} Feb 23 18:54:38 crc kubenswrapper[4768]: I0223 18:54:38.971561 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" event={"ID":"a7d9a362-95f1-4326-99a7-121ec8a4816f","Type":"ContainerStarted","Data":"c20fefa725b70e36f108a4da299627625861fee82c5f5b0f6aa723b783958574"} Feb 23 18:54:39 crc kubenswrapper[4768]: I0223 18:54:39.001729 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" podStartSLOduration=1.978780497 podStartE2EDuration="11.001699288s" podCreationTimestamp="2026-02-23 18:54:28 +0000 UTC" 
firstStartedPulling="2026-02-23 18:54:28.986124591 +0000 UTC m=+1264.376610401" lastFinishedPulling="2026-02-23 18:54:38.009043352 +0000 UTC m=+1273.399529192" observedRunningTime="2026-02-23 18:54:38.990543253 +0000 UTC m=+1274.381029093" watchObservedRunningTime="2026-02-23 18:54:39.001699288 +0000 UTC m=+1274.392185098" Feb 23 18:54:40 crc kubenswrapper[4768]: I0223 18:54:40.635940 4768 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod528c732a-1fab-4546-b27d-e825547020ff"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod528c732a-1fab-4546-b27d-e825547020ff] : Timed out while waiting for systemd to remove kubepods-besteffort-pod528c732a_1fab_4546_b27d_e825547020ff.slice" Feb 23 18:54:40 crc kubenswrapper[4768]: E0223 18:54:40.636545 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod528c732a-1fab-4546-b27d-e825547020ff] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod528c732a-1fab-4546-b27d-e825547020ff] : Timed out while waiting for systemd to remove kubepods-besteffort-pod528c732a_1fab_4546_b27d_e825547020ff.slice" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" podUID="528c732a-1fab-4546-b27d-e825547020ff" Feb 23 18:54:40 crc kubenswrapper[4768]: I0223 18:54:40.996552 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-nm8tp" Feb 23 18:54:41 crc kubenswrapper[4768]: I0223 18:54:41.074996 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nm8tp"] Feb 23 18:54:41 crc kubenswrapper[4768]: I0223 18:54:41.087108 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-nm8tp"] Feb 23 18:54:41 crc kubenswrapper[4768]: I0223 18:54:41.318003 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="528c732a-1fab-4546-b27d-e825547020ff" path="/var/lib/kubelet/pods/528c732a-1fab-4546-b27d-e825547020ff/volumes" Feb 23 18:54:45 crc kubenswrapper[4768]: I0223 18:54:45.054932 4768 generic.go:334] "Generic (PLEG): container finished" podID="3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc" containerID="666be9e6ff7e4b090b4775eb91950852d024c5e18ac182866c8a4bad9a00634c" exitCode=0 Feb 23 18:54:45 crc kubenswrapper[4768]: I0223 18:54:45.055068 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc","Type":"ContainerDied","Data":"666be9e6ff7e4b090b4775eb91950852d024c5e18ac182866c8a4bad9a00634c"} Feb 23 18:54:45 crc kubenswrapper[4768]: I0223 18:54:45.060671 4768 generic.go:334] "Generic (PLEG): container finished" podID="b8cb5a51-f628-42ca-9f9a-002d2f2f3b00" containerID="8b2f9d26ee9ec4307f696e29bb39ae21dcdfc4167399759003d0af28368f340e" exitCode=0 Feb 23 18:54:45 crc kubenswrapper[4768]: I0223 18:54:45.060740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00","Type":"ContainerDied","Data":"8b2f9d26ee9ec4307f696e29bb39ae21dcdfc4167399759003d0af28368f340e"} Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.082856 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b8cb5a51-f628-42ca-9f9a-002d2f2f3b00","Type":"ContainerStarted","Data":"ec9c3b46a077256a0fe6a9c401403d342555dde081ea83e4c3fcb3f1fa679f3a"} Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.083671 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.087723 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc","Type":"ContainerStarted","Data":"f1046f8f061b6d67f64924c2895f369934e7309dc67d834c99b4833cc1f0aa18"} Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.088033 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.134028 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.134005098 podStartE2EDuration="38.134005098s" podCreationTimestamp="2026-02-23 18:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:54:46.112827328 +0000 UTC m=+1281.503313148" watchObservedRunningTime="2026-02-23 18:54:46.134005098 +0000 UTC m=+1281.524490908" Feb 23 18:54:46 crc kubenswrapper[4768]: I0223 18:54:46.150072 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.150052197 podStartE2EDuration="38.150052197s" podCreationTimestamp="2026-02-23 18:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 18:54:46.146753877 +0000 UTC m=+1281.537239697" watchObservedRunningTime="2026-02-23 18:54:46.150052197 +0000 UTC m=+1281.540537997" Feb 23 18:54:49 crc kubenswrapper[4768]: I0223 18:54:49.122630 4768 
generic.go:334] "Generic (PLEG): container finished" podID="a7d9a362-95f1-4326-99a7-121ec8a4816f" containerID="c20fefa725b70e36f108a4da299627625861fee82c5f5b0f6aa723b783958574" exitCode=0 Feb 23 18:54:49 crc kubenswrapper[4768]: I0223 18:54:49.122729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" event={"ID":"a7d9a362-95f1-4326-99a7-121ec8a4816f","Type":"ContainerDied","Data":"c20fefa725b70e36f108a4da299627625861fee82c5f5b0f6aa723b783958574"} Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.585906 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.760871 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle\") pod \"a7d9a362-95f1-4326-99a7-121ec8a4816f\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.761055 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzm58\" (UniqueName: \"kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58\") pod \"a7d9a362-95f1-4326-99a7-121ec8a4816f\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.761215 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory\") pod \"a7d9a362-95f1-4326-99a7-121ec8a4816f\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.761449 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam\") pod \"a7d9a362-95f1-4326-99a7-121ec8a4816f\" (UID: \"a7d9a362-95f1-4326-99a7-121ec8a4816f\") " Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.770537 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58" (OuterVolumeSpecName: "kube-api-access-fzm58") pod "a7d9a362-95f1-4326-99a7-121ec8a4816f" (UID: "a7d9a362-95f1-4326-99a7-121ec8a4816f"). InnerVolumeSpecName "kube-api-access-fzm58". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.772099 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a7d9a362-95f1-4326-99a7-121ec8a4816f" (UID: "a7d9a362-95f1-4326-99a7-121ec8a4816f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.798864 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory" (OuterVolumeSpecName: "inventory") pod "a7d9a362-95f1-4326-99a7-121ec8a4816f" (UID: "a7d9a362-95f1-4326-99a7-121ec8a4816f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.813495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a7d9a362-95f1-4326-99a7-121ec8a4816f" (UID: "a7d9a362-95f1-4326-99a7-121ec8a4816f"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.864402 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.864456 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.864473 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzm58\" (UniqueName: \"kubernetes.io/projected/a7d9a362-95f1-4326-99a7-121ec8a4816f-kube-api-access-fzm58\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:50 crc kubenswrapper[4768]: I0223 18:54:50.864497 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a7d9a362-95f1-4326-99a7-121ec8a4816f-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.155676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" event={"ID":"a7d9a362-95f1-4326-99a7-121ec8a4816f","Type":"ContainerDied","Data":"fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc"} Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.156112 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc6715664855f98d8337b1b273e9b81fb0085245b55b2b694c37fb0ee7e9efc" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.155733 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.373879 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7"] Feb 23 18:54:51 crc kubenswrapper[4768]: E0223 18:54:51.374988 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7d9a362-95f1-4326-99a7-121ec8a4816f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.375017 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7d9a362-95f1-4326-99a7-121ec8a4816f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.375259 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7d9a362-95f1-4326-99a7-121ec8a4816f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.376184 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.378943 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.379277 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.379107 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.380030 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.389281 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7"] Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.479916 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4jl4\" (UniqueName: \"kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.480020 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.480272 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.582440 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.582575 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4jl4\" (UniqueName: \"kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.582629 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.587086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: 
\"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.593870 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.625669 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4jl4\" (UniqueName: \"kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9hqm7\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:51 crc kubenswrapper[4768]: I0223 18:54:51.705234 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:52 crc kubenswrapper[4768]: I0223 18:54:52.114577 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7"] Feb 23 18:54:52 crc kubenswrapper[4768]: I0223 18:54:52.167702 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" event={"ID":"34748e05-17f0-4701-936b-a023c3456a93","Type":"ContainerStarted","Data":"fe4185357f0cbd63857d41b67a5a3c202335b6994c7aad6191f5c7f86299d2ac"} Feb 23 18:54:53 crc kubenswrapper[4768]: I0223 18:54:53.181376 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" event={"ID":"34748e05-17f0-4701-936b-a023c3456a93","Type":"ContainerStarted","Data":"aba7d82e3de93020ec1d60e225889003bf9b26e9109c5bb28b2786d05826c666"} Feb 23 18:54:53 crc kubenswrapper[4768]: I0223 18:54:53.206825 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" podStartSLOduration=1.577226341 podStartE2EDuration="2.20679844s" podCreationTimestamp="2026-02-23 18:54:51 +0000 UTC" firstStartedPulling="2026-02-23 18:54:52.129360694 +0000 UTC m=+1287.519846494" lastFinishedPulling="2026-02-23 18:54:52.758932783 +0000 UTC m=+1288.149418593" observedRunningTime="2026-02-23 18:54:53.203778357 +0000 UTC m=+1288.594264167" watchObservedRunningTime="2026-02-23 18:54:53.20679844 +0000 UTC m=+1288.597284250" Feb 23 18:54:56 crc kubenswrapper[4768]: I0223 18:54:56.226100 4768 generic.go:334] "Generic (PLEG): container finished" podID="34748e05-17f0-4701-936b-a023c3456a93" containerID="aba7d82e3de93020ec1d60e225889003bf9b26e9109c5bb28b2786d05826c666" exitCode=0 Feb 23 18:54:56 crc kubenswrapper[4768]: I0223 18:54:56.226205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" event={"ID":"34748e05-17f0-4701-936b-a023c3456a93","Type":"ContainerDied","Data":"aba7d82e3de93020ec1d60e225889003bf9b26e9109c5bb28b2786d05826c666"} Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.798640 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.935583 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory\") pod \"34748e05-17f0-4701-936b-a023c3456a93\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.935684 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam\") pod \"34748e05-17f0-4701-936b-a023c3456a93\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.935981 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4jl4\" (UniqueName: \"kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4\") pod \"34748e05-17f0-4701-936b-a023c3456a93\" (UID: \"34748e05-17f0-4701-936b-a023c3456a93\") " Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.946653 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4" (OuterVolumeSpecName: "kube-api-access-h4jl4") pod "34748e05-17f0-4701-936b-a023c3456a93" (UID: "34748e05-17f0-4701-936b-a023c3456a93"). InnerVolumeSpecName "kube-api-access-h4jl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.972042 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory" (OuterVolumeSpecName: "inventory") pod "34748e05-17f0-4701-936b-a023c3456a93" (UID: "34748e05-17f0-4701-936b-a023c3456a93"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:54:57 crc kubenswrapper[4768]: I0223 18:54:57.981730 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "34748e05-17f0-4701-936b-a023c3456a93" (UID: "34748e05-17f0-4701-936b-a023c3456a93"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.038452 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4jl4\" (UniqueName: \"kubernetes.io/projected/34748e05-17f0-4701-936b-a023c3456a93-kube-api-access-h4jl4\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.038503 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.038524 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/34748e05-17f0-4701-936b-a023c3456a93-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.257283 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" 
event={"ID":"34748e05-17f0-4701-936b-a023c3456a93","Type":"ContainerDied","Data":"fe4185357f0cbd63857d41b67a5a3c202335b6994c7aad6191f5c7f86299d2ac"} Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.257329 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4185357f0cbd63857d41b67a5a3c202335b6994c7aad6191f5c7f86299d2ac" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.257715 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9hqm7" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.347611 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc"] Feb 23 18:54:58 crc kubenswrapper[4768]: E0223 18:54:58.348300 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34748e05-17f0-4701-936b-a023c3456a93" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.348327 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="34748e05-17f0-4701-936b-a023c3456a93" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.348619 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="34748e05-17f0-4701-936b-a023c3456a93" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.349736 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.353276 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.353426 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.353532 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.353647 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.363927 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc"] Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.446191 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj4ww\" (UniqueName: \"kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.446951 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 
18:54:58.447047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.447152 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.549960 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj4ww\" (UniqueName: \"kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.550147 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.550223 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.550285 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.560038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.561687 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.566842 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.589173 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj4ww\" (UniqueName: \"kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.681665 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:54:58 crc kubenswrapper[4768]: I0223 18:54:58.969420 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 23 18:54:59 crc kubenswrapper[4768]: I0223 18:54:59.009583 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 23 18:54:59 crc kubenswrapper[4768]: I0223 18:54:59.287727 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc"] Feb 23 18:55:00 crc kubenswrapper[4768]: I0223 18:55:00.281162 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" event={"ID":"dbe6c2e2-e359-4953-848a-c06651ec5760","Type":"ContainerStarted","Data":"36daa127032b4d9ffe6b883a1942fd6b47cd9e4340eeef8002bef1c1b2e67757"} Feb 23 18:55:00 crc kubenswrapper[4768]: I0223 18:55:00.281989 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" event={"ID":"dbe6c2e2-e359-4953-848a-c06651ec5760","Type":"ContainerStarted","Data":"61b4ba8effeb4b6e90c5affcbcccd6ad0e40b748e9632f88fe4707aaf12758dc"} Feb 23 18:55:00 crc kubenswrapper[4768]: I0223 18:55:00.327060 4768 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" podStartSLOduration=1.94738231 podStartE2EDuration="2.32702711s" podCreationTimestamp="2026-02-23 18:54:58 +0000 UTC" firstStartedPulling="2026-02-23 18:54:59.297824043 +0000 UTC m=+1294.688309843" lastFinishedPulling="2026-02-23 18:54:59.677468843 +0000 UTC m=+1295.067954643" observedRunningTime="2026-02-23 18:55:00.309518911 +0000 UTC m=+1295.700004711" watchObservedRunningTime="2026-02-23 18:55:00.32702711 +0000 UTC m=+1295.717512950" Feb 23 18:55:37 crc kubenswrapper[4768]: I0223 18:55:37.978791 4768 scope.go:117] "RemoveContainer" containerID="861026f797e844d6e86a3e0b73a0016d3fab7399ae6f82d8aad40e6d60de1847" Feb 23 18:55:38 crc kubenswrapper[4768]: I0223 18:55:38.021164 4768 scope.go:117] "RemoveContainer" containerID="e225380e02494ec42b14a00ef618931f63d766367eccf9085b5b44f5a893e725" Feb 23 18:55:38 crc kubenswrapper[4768]: I0223 18:55:38.104792 4768 scope.go:117] "RemoveContainer" containerID="c1c2091e11192e05561807845a68870367f3725dfb72a474d2be078b72b2d602" Feb 23 18:55:39 crc kubenswrapper[4768]: I0223 18:55:39.545538 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:55:39 crc kubenswrapper[4768]: I0223 18:55:39.545631 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:56:09 crc kubenswrapper[4768]: I0223 18:56:09.545516 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:56:09 crc kubenswrapper[4768]: I0223 18:56:09.546327 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:56:38 crc kubenswrapper[4768]: I0223 18:56:38.236145 4768 scope.go:117] "RemoveContainer" containerID="86e1f1432890eb125a20d8caa185e86c98fda054fe4d0053804ce9dd6bb0dcd2" Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.545071 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.545163 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.545229 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.546381 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.546823 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c" gracePeriod=600 Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.920920 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c" exitCode=0 Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.921004 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c"} Feb 23 18:56:39 crc kubenswrapper[4768]: I0223 18:56:39.921130 4768 scope.go:117] "RemoveContainer" containerID="45df64eeeccd82b6a979c0ae4c5ed47e40e22edac6d562f0aee3b3732227d91f" Feb 23 18:56:40 crc kubenswrapper[4768]: I0223 18:56:40.938491 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"} Feb 23 18:58:39 crc kubenswrapper[4768]: I0223 18:58:39.545474 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:58:39 crc kubenswrapper[4768]: I0223 18:58:39.546201 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:59:09 crc kubenswrapper[4768]: I0223 18:59:09.546587 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:59:09 crc kubenswrapper[4768]: I0223 18:59:09.547628 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.067932 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rsj29"] Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.092090 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a794-account-create-update-wccgz"] Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.105401 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rsj29"] Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.114624 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a794-account-create-update-wccgz"] Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.327874 
4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45ebc246-b507-4457-a2e3-be3ac8ab0aee" path="/var/lib/kubelet/pods/45ebc246-b507-4457-a2e3-be3ac8ab0aee/volumes" Feb 23 18:59:21 crc kubenswrapper[4768]: I0223 18:59:21.329113 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3cff39d-7895-4fa0-ac21-900198443faf" path="/var/lib/kubelet/pods/e3cff39d-7895-4fa0-ac21-900198443faf/volumes" Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.052011 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-t2chz"] Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.074868 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-nzx66"] Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.085345 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-05ea-account-create-update-8q99w"] Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.118752 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-nzx66"] Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.139211 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-05ea-account-create-update-8q99w"] Feb 23 18:59:22 crc kubenswrapper[4768]: I0223 18:59:22.150872 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-t2chz"] Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.043517 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4ba9-account-create-update-8w72x"] Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.059510 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4ba9-account-create-update-8w72x"] Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.323668 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d54474d-4da4-4a70-8505-4cee013ef52a" 
path="/var/lib/kubelet/pods/4d54474d-4da4-4a70-8505-4cee013ef52a/volumes" Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.324290 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c4f6d48-0066-49cb-976b-03567c12faa5" path="/var/lib/kubelet/pods/8c4f6d48-0066-49cb-976b-03567c12faa5/volumes" Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.324825 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd209e6b-7e31-4061-9cfe-2bfa6b279c76" path="/var/lib/kubelet/pods/dd209e6b-7e31-4061-9cfe-2bfa6b279c76/volumes" Feb 23 18:59:23 crc kubenswrapper[4768]: I0223 18:59:23.325360 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f225be18-66fe-405e-890e-51d17f889971" path="/var/lib/kubelet/pods/f225be18-66fe-405e-890e-51d17f889971/volumes" Feb 23 18:59:36 crc kubenswrapper[4768]: I0223 18:59:36.092242 4768 generic.go:334] "Generic (PLEG): container finished" podID="dbe6c2e2-e359-4953-848a-c06651ec5760" containerID="36daa127032b4d9ffe6b883a1942fd6b47cd9e4340eeef8002bef1c1b2e67757" exitCode=0 Feb 23 18:59:36 crc kubenswrapper[4768]: I0223 18:59:36.092510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" event={"ID":"dbe6c2e2-e359-4953-848a-c06651ec5760","Type":"ContainerDied","Data":"36daa127032b4d9ffe6b883a1942fd6b47cd9e4340eeef8002bef1c1b2e67757"} Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.611034 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.799725 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle\") pod \"dbe6c2e2-e359-4953-848a-c06651ec5760\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.799795 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam\") pod \"dbe6c2e2-e359-4953-848a-c06651ec5760\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.799899 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory\") pod \"dbe6c2e2-e359-4953-848a-c06651ec5760\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.799929 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj4ww\" (UniqueName: \"kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww\") pod \"dbe6c2e2-e359-4953-848a-c06651ec5760\" (UID: \"dbe6c2e2-e359-4953-848a-c06651ec5760\") " Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.808026 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww" (OuterVolumeSpecName: "kube-api-access-tj4ww") pod "dbe6c2e2-e359-4953-848a-c06651ec5760" (UID: "dbe6c2e2-e359-4953-848a-c06651ec5760"). InnerVolumeSpecName "kube-api-access-tj4ww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.813234 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "dbe6c2e2-e359-4953-848a-c06651ec5760" (UID: "dbe6c2e2-e359-4953-848a-c06651ec5760"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.846495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory" (OuterVolumeSpecName: "inventory") pod "dbe6c2e2-e359-4953-848a-c06651ec5760" (UID: "dbe6c2e2-e359-4953-848a-c06651ec5760"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.856021 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dbe6c2e2-e359-4953-848a-c06651ec5760" (UID: "dbe6c2e2-e359-4953-848a-c06651ec5760"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.905602 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.905695 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.906217 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbe6c2e2-e359-4953-848a-c06651ec5760-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 18:59:37 crc kubenswrapper[4768]: I0223 18:59:37.906283 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj4ww\" (UniqueName: \"kubernetes.io/projected/dbe6c2e2-e359-4953-848a-c06651ec5760-kube-api-access-tj4ww\") on node \"crc\" DevicePath \"\"" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.124226 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" event={"ID":"dbe6c2e2-e359-4953-848a-c06651ec5760","Type":"ContainerDied","Data":"61b4ba8effeb4b6e90c5affcbcccd6ad0e40b748e9632f88fe4707aaf12758dc"} Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.124308 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b4ba8effeb4b6e90c5affcbcccd6ad0e40b748e9632f88fe4707aaf12758dc" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.124354 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.230804 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7"] Feb 23 18:59:38 crc kubenswrapper[4768]: E0223 18:59:38.231292 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbe6c2e2-e359-4953-848a-c06651ec5760" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.231311 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbe6c2e2-e359-4953-848a-c06651ec5760" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.231537 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbe6c2e2-e359-4953-848a-c06651ec5760" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.232194 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.234481 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.234513 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.234906 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.235291 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.244376 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7"] Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.416302 4768 scope.go:117] "RemoveContainer" containerID="6fbcefd90a1f0ed51dbcbe89f7eefa6f52a6f9090a143e5bb1e180f6866d8542" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.418552 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.419160 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.419229 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qftql\" (UniqueName: \"kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.443194 4768 scope.go:117] "RemoveContainer" containerID="9a02329cf4330379c24f2953dd8142a088b8e510800674af4a32fbe1ea54c7cc" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.466878 4768 scope.go:117] "RemoveContainer" containerID="260a905565f25427c0e6ced6534920f624adf74de23542e90163c2a07951e183" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.498096 4768 scope.go:117] "RemoveContainer" containerID="82fa4221546de4c5ee73c233c6b679d35faa8293f26e94f1d5734bf1bccca6a1" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.521230 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.521307 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qftql\" (UniqueName: \"kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.521396 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.526298 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.528398 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.541780 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qftql\" (UniqueName: \"kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 
18:59:38.587870 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.707001 4768 scope.go:117] "RemoveContainer" containerID="14adec7acd33fc66a67410556b64fa408160758caa217771fdd1c55cf9c3d7c6" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.747424 4768 scope.go:117] "RemoveContainer" containerID="7a919a0fab20f4dd5814b43ec678770debc5e111833322911c5ddec9cb8d46a8" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.807419 4768 scope.go:117] "RemoveContainer" containerID="241e453d5d20186226ad1c1a109ad47c3a1d3b0d5d23306d9fe6412494c22095" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.858319 4768 scope.go:117] "RemoveContainer" containerID="c6e7a9b66d2250d2d559c9e3f04cf4e4171be2eb7100baee26e37a967f56f49e" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.886516 4768 scope.go:117] "RemoveContainer" containerID="980acb98501da422063ed41656c9fbccdf7f1d1c7379ad5ba13d197475767191" Feb 23 18:59:38 crc kubenswrapper[4768]: I0223 18:59:38.913345 4768 scope.go:117] "RemoveContainer" containerID="69a91a8d03f8f1668cb256e927f35542f2267992caece74e93ef2a0e4cd6bcc3" Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.194571 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7"] Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.200512 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.545400 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 
18:59:39.545509 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.545606 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.547511 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 18:59:39 crc kubenswrapper[4768]: I0223 18:59:39.547657 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" gracePeriod=600 Feb 23 18:59:39 crc kubenswrapper[4768]: E0223 18:59:39.720625 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.147143 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" exitCode=0 Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.147223 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"} Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.147283 4768 scope.go:117] "RemoveContainer" containerID="c7a03e90e8abb2a600d31f3e0012982bff2d626a93cfd11b950ec9a0d827a80c" Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.148047 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 18:59:40 crc kubenswrapper[4768]: E0223 18:59:40.148476 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.151357 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" event={"ID":"964d25fb-0600-4332-9f40-85f700d35088","Type":"ContainerStarted","Data":"263ecee1fc741ae53e999634807a4e9f8652c15e02b0178a6a23c3ac1451ccb0"} Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.151388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" 
event={"ID":"964d25fb-0600-4332-9f40-85f700d35088","Type":"ContainerStarted","Data":"47db2a5435e7ef333cb34cb9bac7d47c20379106a0879ce89af26c27889e8de3"} Feb 23 18:59:40 crc kubenswrapper[4768]: I0223 18:59:40.197737 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" podStartSLOduration=1.774340611 podStartE2EDuration="2.197710908s" podCreationTimestamp="2026-02-23 18:59:38 +0000 UTC" firstStartedPulling="2026-02-23 18:59:39.200090783 +0000 UTC m=+1574.590576593" lastFinishedPulling="2026-02-23 18:59:39.62346105 +0000 UTC m=+1575.013946890" observedRunningTime="2026-02-23 18:59:40.182650374 +0000 UTC m=+1575.573136174" watchObservedRunningTime="2026-02-23 18:59:40.197710908 +0000 UTC m=+1575.588196728" Feb 23 18:59:46 crc kubenswrapper[4768]: I0223 18:59:46.054005 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-42xrb"] Feb 23 18:59:46 crc kubenswrapper[4768]: I0223 18:59:46.065511 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-42xrb"] Feb 23 18:59:47 crc kubenswrapper[4768]: I0223 18:59:47.344069 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8580c06d-92c6-47e7-99ff-21b0ea32de64" path="/var/lib/kubelet/pods/8580c06d-92c6-47e7-99ff-21b0ea32de64/volumes" Feb 23 18:59:48 crc kubenswrapper[4768]: I0223 18:59:48.049553 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-t6rxx"] Feb 23 18:59:48 crc kubenswrapper[4768]: I0223 18:59:48.063386 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-t6rxx"] Feb 23 18:59:48 crc kubenswrapper[4768]: I0223 18:59:48.080134 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-9ntcv"] Feb 23 18:59:48 crc kubenswrapper[4768]: I0223 18:59:48.089213 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-9ntcv"] Feb 23 18:59:49 crc kubenswrapper[4768]: I0223 18:59:49.321009 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07034a67-ca3d-4e5f-936a-b32c08b85724" path="/var/lib/kubelet/pods/07034a67-ca3d-4e5f-936a-b32c08b85724/volumes" Feb 23 18:59:49 crc kubenswrapper[4768]: I0223 18:59:49.322044 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2936a6fe-a582-43cb-a967-e99ba45903ea" path="/var/lib/kubelet/pods/2936a6fe-a582-43cb-a967-e99ba45903ea/volumes" Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.056443 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-016f-account-create-update-vckb8"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.069404 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-xbwcw"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.081214 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cd70-account-create-update-59jt8"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.090784 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-016f-account-create-update-vckb8"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.100467 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-195c-account-create-update-2pdfs"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.110236 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cd70-account-create-update-59jt8"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.143263 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-xbwcw"] Feb 23 18:59:52 crc kubenswrapper[4768]: I0223 18:59:52.160725 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-195c-account-create-update-2pdfs"] Feb 23 18:59:53 crc kubenswrapper[4768]: I0223 18:59:53.329644 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dee7c39-12a7-42a0-8c19-3420b5dcb63e" path="/var/lib/kubelet/pods/3dee7c39-12a7-42a0-8c19-3420b5dcb63e/volumes" Feb 23 18:59:53 crc kubenswrapper[4768]: I0223 18:59:53.331965 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a997222-9831-4e01-ac9b-34383ec3649e" path="/var/lib/kubelet/pods/4a997222-9831-4e01-ac9b-34383ec3649e/volumes" Feb 23 18:59:53 crc kubenswrapper[4768]: I0223 18:59:53.333068 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786b1f7f-e1c7-4002-a1db-33c44f0ad098" path="/var/lib/kubelet/pods/786b1f7f-e1c7-4002-a1db-33c44f0ad098/volumes" Feb 23 18:59:53 crc kubenswrapper[4768]: I0223 18:59:53.334377 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c008f19-c09b-4721-9a15-9851f9a516ab" path="/var/lib/kubelet/pods/7c008f19-c09b-4721-9a15-9851f9a516ab/volumes" Feb 23 18:59:55 crc kubenswrapper[4768]: I0223 18:59:55.316892 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 18:59:55 crc kubenswrapper[4768]: E0223 18:59:55.317748 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 18:59:57 crc kubenswrapper[4768]: I0223 18:59:57.066509 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-rfckb"] Feb 23 18:59:57 crc kubenswrapper[4768]: I0223 18:59:57.082776 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-rfckb"] Feb 23 18:59:57 crc kubenswrapper[4768]: I0223 18:59:57.357814 4768 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513bdad8-19c5-4fea-aaef-afecd7f21ab3" path="/var/lib/kubelet/pods/513bdad8-19c5-4fea-aaef-afecd7f21ab3/volumes" Feb 23 18:59:59 crc kubenswrapper[4768]: I0223 18:59:59.046704 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-c8v6w"] Feb 23 18:59:59 crc kubenswrapper[4768]: I0223 18:59:59.058886 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-c8v6w"] Feb 23 18:59:59 crc kubenswrapper[4768]: I0223 18:59:59.331972 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b252582c-b708-4d5d-be78-dc90b4bd3990" path="/var/lib/kubelet/pods/b252582c-b708-4d5d-be78-dc90b4bd3990/volumes" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.157376 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw"] Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.159334 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.162579 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.162770 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.176908 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw"] Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.257089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrcx\" (UniqueName: \"kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.257224 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.257363 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.364632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.365009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrcx\" (UniqueName: \"kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.365284 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.365873 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.378891 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.392681 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.393582 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrcx\" (UniqueName: \"kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx\") pod \"collect-profiles-29531220-prhtw\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.394745 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.404166 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.467662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.467758 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " 
pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.468035 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tld4\" (UniqueName: \"kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.483376 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.572942 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.573022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.573049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tld4\" (UniqueName: \"kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.574039 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.574322 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.613236 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tld4\" (UniqueName: \"kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4\") pod \"redhat-operators-dfbvn\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.899950 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:00 crc kubenswrapper[4768]: I0223 19:00:00.964003 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw"] Feb 23 19:00:01 crc kubenswrapper[4768]: I0223 19:00:01.437915 4768 generic.go:334] "Generic (PLEG): container finished" podID="cb735541-cf3e-4a2a-afd4-05e9a11d0364" containerID="6c41a52d10b985ccb4667fa0792cf3a1076bb5608f35c8301addc75a936ce589" exitCode=0 Feb 23 19:00:01 crc kubenswrapper[4768]: I0223 19:00:01.438204 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" event={"ID":"cb735541-cf3e-4a2a-afd4-05e9a11d0364","Type":"ContainerDied","Data":"6c41a52d10b985ccb4667fa0792cf3a1076bb5608f35c8301addc75a936ce589"} Feb 23 19:00:01 crc kubenswrapper[4768]: I0223 19:00:01.438385 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" event={"ID":"cb735541-cf3e-4a2a-afd4-05e9a11d0364","Type":"ContainerStarted","Data":"27dea3e5e1dcce61dc39f22c3a976e5dd377902faabf235114b77877d83d154c"} Feb 23 19:00:01 crc kubenswrapper[4768]: I0223 19:00:01.451225 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:01 crc kubenswrapper[4768]: W0223 19:00:01.485916 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a5d858_2600_4e56_92e5_84305552296e.slice/crio-dd1048cbf8bb443ec30a62292d6976552c036890f60b7e4e6efe19914d67c3df WatchSource:0}: Error finding container dd1048cbf8bb443ec30a62292d6976552c036890f60b7e4e6efe19914d67c3df: Status 404 returned error can't find the container with id dd1048cbf8bb443ec30a62292d6976552c036890f60b7e4e6efe19914d67c3df Feb 23 19:00:02 crc kubenswrapper[4768]: I0223 19:00:02.456715 4768 
generic.go:334] "Generic (PLEG): container finished" podID="45a5d858-2600-4e56-92e5-84305552296e" containerID="be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25" exitCode=0 Feb 23 19:00:02 crc kubenswrapper[4768]: I0223 19:00:02.457388 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerDied","Data":"be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25"} Feb 23 19:00:02 crc kubenswrapper[4768]: I0223 19:00:02.457474 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerStarted","Data":"dd1048cbf8bb443ec30a62292d6976552c036890f60b7e4e6efe19914d67c3df"} Feb 23 19:00:02 crc kubenswrapper[4768]: I0223 19:00:02.878519 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.030160 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txrcx\" (UniqueName: \"kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx\") pod \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.030284 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume\") pod \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.030501 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume\") pod \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\" (UID: \"cb735541-cf3e-4a2a-afd4-05e9a11d0364\") " Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.032150 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb735541-cf3e-4a2a-afd4-05e9a11d0364" (UID: "cb735541-cf3e-4a2a-afd4-05e9a11d0364"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.038594 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx" (OuterVolumeSpecName: "kube-api-access-txrcx") pod "cb735541-cf3e-4a2a-afd4-05e9a11d0364" (UID: "cb735541-cf3e-4a2a-afd4-05e9a11d0364"). InnerVolumeSpecName "kube-api-access-txrcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.040529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb735541-cf3e-4a2a-afd4-05e9a11d0364" (UID: "cb735541-cf3e-4a2a-afd4-05e9a11d0364"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.132959 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb735541-cf3e-4a2a-afd4-05e9a11d0364-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.133011 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txrcx\" (UniqueName: \"kubernetes.io/projected/cb735541-cf3e-4a2a-afd4-05e9a11d0364-kube-api-access-txrcx\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.133030 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb735541-cf3e-4a2a-afd4-05e9a11d0364-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.485324 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" event={"ID":"cb735541-cf3e-4a2a-afd4-05e9a11d0364","Type":"ContainerDied","Data":"27dea3e5e1dcce61dc39f22c3a976e5dd377902faabf235114b77877d83d154c"} Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.485848 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27dea3e5e1dcce61dc39f22c3a976e5dd377902faabf235114b77877d83d154c" Feb 23 19:00:03 crc kubenswrapper[4768]: I0223 19:00:03.485487 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw" Feb 23 19:00:08 crc kubenswrapper[4768]: I0223 19:00:08.307563 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:00:08 crc kubenswrapper[4768]: E0223 19:00:08.308622 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:00:19 crc kubenswrapper[4768]: I0223 19:00:19.696324 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerStarted","Data":"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3"} Feb 23 19:00:21 crc kubenswrapper[4768]: I0223 19:00:21.308937 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:00:21 crc kubenswrapper[4768]: E0223 19:00:21.311217 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:00:21 crc kubenswrapper[4768]: I0223 19:00:21.724922 4768 generic.go:334] "Generic (PLEG): container finished" podID="45a5d858-2600-4e56-92e5-84305552296e" 
containerID="0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3" exitCode=0 Feb 23 19:00:21 crc kubenswrapper[4768]: I0223 19:00:21.724991 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerDied","Data":"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3"} Feb 23 19:00:22 crc kubenswrapper[4768]: I0223 19:00:22.737008 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerStarted","Data":"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af"} Feb 23 19:00:22 crc kubenswrapper[4768]: I0223 19:00:22.786337 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dfbvn" podStartSLOduration=3.070381665 podStartE2EDuration="22.786025115s" podCreationTimestamp="2026-02-23 19:00:00 +0000 UTC" firstStartedPulling="2026-02-23 19:00:02.459665768 +0000 UTC m=+1597.850151558" lastFinishedPulling="2026-02-23 19:00:22.175309178 +0000 UTC m=+1617.565795008" observedRunningTime="2026-02-23 19:00:22.767919138 +0000 UTC m=+1618.158404958" watchObservedRunningTime="2026-02-23 19:00:22.786025115 +0000 UTC m=+1618.176510935" Feb 23 19:00:30 crc kubenswrapper[4768]: I0223 19:00:30.902058 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:30 crc kubenswrapper[4768]: I0223 19:00:30.902676 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:30 crc kubenswrapper[4768]: I0223 19:00:30.965379 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:31 crc kubenswrapper[4768]: I0223 19:00:31.908434 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:31 crc kubenswrapper[4768]: I0223 19:00:31.977822 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:32 crc kubenswrapper[4768]: I0223 19:00:32.052737 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-g998f"] Feb 23 19:00:32 crc kubenswrapper[4768]: I0223 19:00:32.064098 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-g998f"] Feb 23 19:00:32 crc kubenswrapper[4768]: I0223 19:00:32.308990 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:00:32 crc kubenswrapper[4768]: E0223 19:00:32.309936 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:00:33 crc kubenswrapper[4768]: I0223 19:00:33.329122 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827f35c4-f9c8-4dea-8da7-a1ca6296b0f5" path="/var/lib/kubelet/pods/827f35c4-f9c8-4dea-8da7-a1ca6296b0f5/volumes" Feb 23 19:00:33 crc kubenswrapper[4768]: I0223 19:00:33.866418 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dfbvn" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="registry-server" containerID="cri-o://6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af" gracePeriod=2 Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.379893 4768 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.530275 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities\") pod \"45a5d858-2600-4e56-92e5-84305552296e\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.530331 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tld4\" (UniqueName: \"kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4\") pod \"45a5d858-2600-4e56-92e5-84305552296e\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.530381 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content\") pod \"45a5d858-2600-4e56-92e5-84305552296e\" (UID: \"45a5d858-2600-4e56-92e5-84305552296e\") " Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.531641 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities" (OuterVolumeSpecName: "utilities") pod "45a5d858-2600-4e56-92e5-84305552296e" (UID: "45a5d858-2600-4e56-92e5-84305552296e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.537315 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4" (OuterVolumeSpecName: "kube-api-access-2tld4") pod "45a5d858-2600-4e56-92e5-84305552296e" (UID: "45a5d858-2600-4e56-92e5-84305552296e"). InnerVolumeSpecName "kube-api-access-2tld4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.633459 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.633523 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tld4\" (UniqueName: \"kubernetes.io/projected/45a5d858-2600-4e56-92e5-84305552296e-kube-api-access-2tld4\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.673132 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45a5d858-2600-4e56-92e5-84305552296e" (UID: "45a5d858-2600-4e56-92e5-84305552296e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.740101 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45a5d858-2600-4e56-92e5-84305552296e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.880809 4768 generic.go:334] "Generic (PLEG): container finished" podID="45a5d858-2600-4e56-92e5-84305552296e" containerID="6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af" exitCode=0 Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.880880 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerDied","Data":"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af"} Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.880909 4768 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dfbvn" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.880929 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dfbvn" event={"ID":"45a5d858-2600-4e56-92e5-84305552296e","Type":"ContainerDied","Data":"dd1048cbf8bb443ec30a62292d6976552c036890f60b7e4e6efe19914d67c3df"} Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.880959 4768 scope.go:117] "RemoveContainer" containerID="6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.906802 4768 scope.go:117] "RemoveContainer" containerID="0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.929009 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.937063 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dfbvn"] Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.953515 4768 scope.go:117] "RemoveContainer" containerID="be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.981090 4768 scope.go:117] "RemoveContainer" containerID="6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af" Feb 23 19:00:34 crc kubenswrapper[4768]: E0223 19:00:34.981783 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af\": container with ID starting with 6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af not found: ID does not exist" containerID="6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.981844 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af"} err="failed to get container status \"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af\": rpc error: code = NotFound desc = could not find container \"6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af\": container with ID starting with 6c807b44c35a25b4c824bf15812c8aa5f4a8428ef6a59fbcea4b411093f902af not found: ID does not exist" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.981881 4768 scope.go:117] "RemoveContainer" containerID="0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3" Feb 23 19:00:34 crc kubenswrapper[4768]: E0223 19:00:34.982719 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3\": container with ID starting with 0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3 not found: ID does not exist" containerID="0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.982752 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3"} err="failed to get container status \"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3\": rpc error: code = NotFound desc = could not find container \"0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3\": container with ID starting with 0eee6122be8085e0264b98988453a4d018dc4cbe5e1fac0751cc6ff5c7feb6f3 not found: ID does not exist" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.982775 4768 scope.go:117] "RemoveContainer" containerID="be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25" Feb 23 19:00:34 crc kubenswrapper[4768]: E0223 
19:00:34.983518 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25\": container with ID starting with be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25 not found: ID does not exist" containerID="be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25" Feb 23 19:00:34 crc kubenswrapper[4768]: I0223 19:00:34.983548 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25"} err="failed to get container status \"be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25\": rpc error: code = NotFound desc = could not find container \"be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25\": container with ID starting with be9f02983f4525ef44eb18b28a053d7a9c81fb01a733f83a51b66374101ead25 not found: ID does not exist" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.331648 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45a5d858-2600-4e56-92e5-84305552296e" path="/var/lib/kubelet/pods/45a5d858-2600-4e56-92e5-84305552296e/volumes" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.638395 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:35 crc kubenswrapper[4768]: E0223 19:00:35.639044 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="registry-server" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639064 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="registry-server" Feb 23 19:00:35 crc kubenswrapper[4768]: E0223 19:00:35.639099 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb735541-cf3e-4a2a-afd4-05e9a11d0364" 
containerName="collect-profiles" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639109 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb735541-cf3e-4a2a-afd4-05e9a11d0364" containerName="collect-profiles" Feb 23 19:00:35 crc kubenswrapper[4768]: E0223 19:00:35.639147 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="extract-content" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639156 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="extract-content" Feb 23 19:00:35 crc kubenswrapper[4768]: E0223 19:00:35.639164 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="extract-utilities" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639170 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="extract-utilities" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639530 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb735541-cf3e-4a2a-afd4-05e9a11d0364" containerName="collect-profiles" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.639564 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="45a5d858-2600-4e56-92e5-84305552296e" containerName="registry-server" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.644652 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.651899 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.761492 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2s9r\" (UniqueName: \"kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.761874 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.762002 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.864110 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2s9r\" (UniqueName: \"kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.864164 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.864218 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.864767 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.865328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.886524 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2s9r\" (UniqueName: \"kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r\") pod \"redhat-marketplace-9g9qg\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:35 crc kubenswrapper[4768]: I0223 19:00:35.983645 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:36 crc kubenswrapper[4768]: I0223 19:00:36.490895 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:36 crc kubenswrapper[4768]: I0223 19:00:36.924189 4768 generic.go:334] "Generic (PLEG): container finished" podID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerID="d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38" exitCode=0 Feb 23 19:00:36 crc kubenswrapper[4768]: I0223 19:00:36.924340 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerDied","Data":"d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38"} Feb 23 19:00:36 crc kubenswrapper[4768]: I0223 19:00:36.924648 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerStarted","Data":"3354d4ff668f663ec7f81e3a086a80a891e25eab0bc617e8d7651afec6eb0071"} Feb 23 19:00:37 crc kubenswrapper[4768]: I0223 19:00:37.940786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerStarted","Data":"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd"} Feb 23 19:00:38 crc kubenswrapper[4768]: I0223 19:00:38.958499 4768 generic.go:334] "Generic (PLEG): container finished" podID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerID="6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd" exitCode=0 Feb 23 19:00:38 crc kubenswrapper[4768]: I0223 19:00:38.958559 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" 
event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerDied","Data":"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd"} Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.068964 4768 scope.go:117] "RemoveContainer" containerID="dc0f3d5faad33c49d050477fa8cafeb7f2419b4f3e81143cb6020cadff877def" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.116162 4768 scope.go:117] "RemoveContainer" containerID="3a86350df92e3452a365a4c07e3d30237200dcc26fbe6d1785ea9447976bab99" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.170211 4768 scope.go:117] "RemoveContainer" containerID="e83aa32b83d68c92344f8eba9aa0d5828014a1e082b665e1d8359a6873b1ea56" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.264660 4768 scope.go:117] "RemoveContainer" containerID="7be0bee3f167d6086a636b359d7101c8428b5f2cf0b31976319d9c36ebd5eef1" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.288685 4768 scope.go:117] "RemoveContainer" containerID="ea16179cae17c36e9b4acb8220a5ba5d4a17774265e42c892beb0070e4ee8ded" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.350621 4768 scope.go:117] "RemoveContainer" containerID="9de828e05cb8f4c10f2cd56f9df5d04f16f6b9c8a5b0b6810a8d2713efe6fc34" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.403028 4768 scope.go:117] "RemoveContainer" containerID="a6cd6bf00a3122d76367a74eb472032860570b39e49199df5e7d824b059baab4" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.445750 4768 scope.go:117] "RemoveContainer" containerID="64f28df03ba902db00b2ee197556ce4b38ff850a4d8f7b9785597c2fff956a9f" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.508422 4768 scope.go:117] "RemoveContainer" containerID="dfa2ccbe7828074aa2f65589ee7290d54da92d2f07a5bd8c8e8f4d4d781323b9" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 19:00:39.537207 4768 scope.go:117] "RemoveContainer" containerID="e2e48ee46ef399153874e6c41c4fd558d4c91072ebe81da5c7a5af5671ac9490" Feb 23 19:00:39 crc kubenswrapper[4768]: I0223 
19:00:39.976698 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerStarted","Data":"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c"} Feb 23 19:00:40 crc kubenswrapper[4768]: I0223 19:00:40.007714 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9g9qg" podStartSLOduration=2.517307155 podStartE2EDuration="5.007687431s" podCreationTimestamp="2026-02-23 19:00:35 +0000 UTC" firstStartedPulling="2026-02-23 19:00:36.926614858 +0000 UTC m=+1632.317100698" lastFinishedPulling="2026-02-23 19:00:39.416995164 +0000 UTC m=+1634.807480974" observedRunningTime="2026-02-23 19:00:40.002093793 +0000 UTC m=+1635.392579593" watchObservedRunningTime="2026-02-23 19:00:40.007687431 +0000 UTC m=+1635.398173221" Feb 23 19:00:44 crc kubenswrapper[4768]: I0223 19:00:44.310189 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:00:44 crc kubenswrapper[4768]: E0223 19:00:44.311089 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:00:45 crc kubenswrapper[4768]: I0223 19:00:45.984535 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:45 crc kubenswrapper[4768]: I0223 19:00:45.984589 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:46 crc kubenswrapper[4768]: 
I0223 19:00:46.056897 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:47 crc kubenswrapper[4768]: I0223 19:00:47.117886 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:47 crc kubenswrapper[4768]: I0223 19:00:47.207095 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:48 crc kubenswrapper[4768]: I0223 19:00:48.044698 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vnlkg"] Feb 23 19:00:48 crc kubenswrapper[4768]: I0223 19:00:48.088768 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n58l7"] Feb 23 19:00:48 crc kubenswrapper[4768]: I0223 19:00:48.112340 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vnlkg"] Feb 23 19:00:48 crc kubenswrapper[4768]: I0223 19:00:48.121209 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n58l7"] Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.140433 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9g9qg" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="registry-server" containerID="cri-o://f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c" gracePeriod=2 Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.319176 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5f03e9-0a62-4567-93d2-5abbb7b89219" path="/var/lib/kubelet/pods/6f5f03e9-0a62-4567-93d2-5abbb7b89219/volumes" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.321492 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd2ba036-bbca-4b94-8f72-70e252e5a2b9" 
path="/var/lib/kubelet/pods/cd2ba036-bbca-4b94-8f72-70e252e5a2b9/volumes" Feb 23 19:00:49 crc kubenswrapper[4768]: E0223 19:00:49.452634 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3dc6918_ad7f_4500_a154_766b4d8c604e.slice/crio-f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3dc6918_ad7f_4500_a154_766b4d8c604e.slice/crio-conmon-f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c.scope\": RecentStats: unable to find data in memory cache]" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.607400 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.744968 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2s9r\" (UniqueName: \"kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r\") pod \"d3dc6918-ad7f-4500-a154-766b4d8c604e\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.745093 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities\") pod \"d3dc6918-ad7f-4500-a154-766b4d8c604e\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.745128 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content\") pod \"d3dc6918-ad7f-4500-a154-766b4d8c604e\" (UID: \"d3dc6918-ad7f-4500-a154-766b4d8c604e\") " 
Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.746167 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities" (OuterVolumeSpecName: "utilities") pod "d3dc6918-ad7f-4500-a154-766b4d8c604e" (UID: "d3dc6918-ad7f-4500-a154-766b4d8c604e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.752091 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r" (OuterVolumeSpecName: "kube-api-access-l2s9r") pod "d3dc6918-ad7f-4500-a154-766b4d8c604e" (UID: "d3dc6918-ad7f-4500-a154-766b4d8c604e"). InnerVolumeSpecName "kube-api-access-l2s9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.783280 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3dc6918-ad7f-4500-a154-766b4d8c604e" (UID: "d3dc6918-ad7f-4500-a154-766b4d8c604e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.848079 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2s9r\" (UniqueName: \"kubernetes.io/projected/d3dc6918-ad7f-4500-a154-766b4d8c604e-kube-api-access-l2s9r\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.848117 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:49 crc kubenswrapper[4768]: I0223 19:00:49.848127 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3dc6918-ad7f-4500-a154-766b4d8c604e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.151429 4768 generic.go:334] "Generic (PLEG): container finished" podID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerID="f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c" exitCode=0 Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.154578 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9g9qg" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.160284 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerDied","Data":"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c"} Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.160398 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9g9qg" event={"ID":"d3dc6918-ad7f-4500-a154-766b4d8c604e","Type":"ContainerDied","Data":"3354d4ff668f663ec7f81e3a086a80a891e25eab0bc617e8d7651afec6eb0071"} Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.160422 4768 scope.go:117] "RemoveContainer" containerID="f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.202775 4768 scope.go:117] "RemoveContainer" containerID="6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.226277 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.236794 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9g9qg"] Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.247773 4768 scope.go:117] "RemoveContainer" containerID="d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.324875 4768 scope.go:117] "RemoveContainer" containerID="f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c" Feb 23 19:00:50 crc kubenswrapper[4768]: E0223 19:00:50.325443 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c\": container with ID starting with f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c not found: ID does not exist" containerID="f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.325503 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c"} err="failed to get container status \"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c\": rpc error: code = NotFound desc = could not find container \"f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c\": container with ID starting with f3db661c44dc28b80b53bdfb88cb89f12d92b916cdeb17f5b503537fe4e3281c not found: ID does not exist" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.325542 4768 scope.go:117] "RemoveContainer" containerID="6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd" Feb 23 19:00:50 crc kubenswrapper[4768]: E0223 19:00:50.327193 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd\": container with ID starting with 6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd not found: ID does not exist" containerID="6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.327223 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd"} err="failed to get container status \"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd\": rpc error: code = NotFound desc = could not find container \"6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd\": container with ID 
starting with 6a88d39e80662c5e05883c21ca20c7f9eb5e935a76f8c30f6954aebeefbb4cdd not found: ID does not exist" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.327241 4768 scope.go:117] "RemoveContainer" containerID="d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38" Feb 23 19:00:50 crc kubenswrapper[4768]: E0223 19:00:50.327572 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38\": container with ID starting with d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38 not found: ID does not exist" containerID="d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38" Feb 23 19:00:50 crc kubenswrapper[4768]: I0223 19:00:50.327595 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38"} err="failed to get container status \"d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38\": rpc error: code = NotFound desc = could not find container \"d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38\": container with ID starting with d55c441ec92b2ba250cd34eb8a50dd30fda29b5b618baaba1519e92cc01c2e38 not found: ID does not exist" Feb 23 19:00:51 crc kubenswrapper[4768]: I0223 19:00:51.322799 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" path="/var/lib/kubelet/pods/d3dc6918-ad7f-4500-a154-766b4d8c604e/volumes" Feb 23 19:00:56 crc kubenswrapper[4768]: I0223 19:00:56.309111 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:00:56 crc kubenswrapper[4768]: E0223 19:00:56.309908 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:00:59 crc kubenswrapper[4768]: I0223 19:00:59.063657 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zv7fq"] Feb 23 19:00:59 crc kubenswrapper[4768]: I0223 19:00:59.074952 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zv7fq"] Feb 23 19:00:59 crc kubenswrapper[4768]: I0223 19:00:59.329747 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d689e8c1-2c72-4fe1-890c-ba586628dd4b" path="/var/lib/kubelet/pods/d689e8c1-2c72-4fe1-890c-ba586628dd4b/volumes" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.040427 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hcnm6"] Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.051222 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hcnm6"] Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.153443 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29531221-9gvxw"] Feb 23 19:01:00 crc kubenswrapper[4768]: E0223 19:01:00.154852 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="extract-content" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.154943 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="extract-content" Feb 23 19:01:00 crc kubenswrapper[4768]: E0223 19:01:00.155111 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="extract-utilities" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.155170 4768 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="extract-utilities" Feb 23 19:01:00 crc kubenswrapper[4768]: E0223 19:01:00.155240 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="registry-server" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.155317 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="registry-server" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.155613 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3dc6918-ad7f-4500-a154-766b4d8c604e" containerName="registry-server" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.156426 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.180064 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531221-9gvxw"] Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.338204 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.338494 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.338552 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f2wc\" (UniqueName: \"kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.338632 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.440786 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.443246 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f2wc\" (UniqueName: \"kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.443442 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.443631 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.452620 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.453463 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.465239 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.481350 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f2wc\" (UniqueName: \"kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc\") pod \"keystone-cron-29531221-9gvxw\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:00 crc kubenswrapper[4768]: I0223 19:01:00.496821 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:01 crc kubenswrapper[4768]: I0223 19:01:01.034164 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29531221-9gvxw"] Feb 23 19:01:01 crc kubenswrapper[4768]: I0223 19:01:01.304106 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-9gvxw" event={"ID":"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0","Type":"ContainerStarted","Data":"1e96bfc0730260483c47efb9b4422a31ae9f711adbd7b1b7cd93e06864c37cd7"} Feb 23 19:01:01 crc kubenswrapper[4768]: I0223 19:01:01.304814 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-9gvxw" event={"ID":"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0","Type":"ContainerStarted","Data":"b0e9be94f855b279f0cd666914ae7fa37f2d3745a4612e52a4398aa316ee9968"} Feb 23 19:01:01 crc kubenswrapper[4768]: I0223 19:01:01.334168 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6df03b-46d7-4b9e-a9cd-949eca9bf718" path="/var/lib/kubelet/pods/6f6df03b-46d7-4b9e-a9cd-949eca9bf718/volumes" Feb 23 19:01:01 crc kubenswrapper[4768]: I0223 19:01:01.341221 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29531221-9gvxw" podStartSLOduration=1.341205087 podStartE2EDuration="1.341205087s" podCreationTimestamp="2026-02-23 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:01:01.337950091 +0000 UTC m=+1656.728435931" watchObservedRunningTime="2026-02-23 19:01:01.341205087 +0000 UTC m=+1656.731690897" Feb 23 19:01:03 crc kubenswrapper[4768]: I0223 19:01:03.338015 4768 generic.go:334] "Generic (PLEG): container finished" podID="2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" containerID="1e96bfc0730260483c47efb9b4422a31ae9f711adbd7b1b7cd93e06864c37cd7" exitCode=0 Feb 23 19:01:03 crc kubenswrapper[4768]: I0223 
19:01:03.338237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-9gvxw" event={"ID":"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0","Type":"ContainerDied","Data":"1e96bfc0730260483c47efb9b4422a31ae9f711adbd7b1b7cd93e06864c37cd7"} Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.799484 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.965170 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys\") pod \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.965336 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle\") pod \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.965369 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f2wc\" (UniqueName: \"kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc\") pod \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.965463 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data\") pod \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\" (UID: \"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0\") " Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.975793 4768 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" (UID: "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:04 crc kubenswrapper[4768]: I0223 19:01:04.976231 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc" (OuterVolumeSpecName: "kube-api-access-8f2wc") pod "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" (UID: "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0"). InnerVolumeSpecName "kube-api-access-8f2wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.018406 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" (UID: "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.054361 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data" (OuterVolumeSpecName: "config-data") pod "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" (UID: "2f06a77a-756a-4cc8-9cea-c6c0da57bfd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.068024 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.068067 4768 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.068080 4768 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.068098 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f2wc\" (UniqueName: \"kubernetes.io/projected/2f06a77a-756a-4cc8-9cea-c6c0da57bfd0-kube-api-access-8f2wc\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.368148 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29531221-9gvxw" event={"ID":"2f06a77a-756a-4cc8-9cea-c6c0da57bfd0","Type":"ContainerDied","Data":"b0e9be94f855b279f0cd666914ae7fa37f2d3745a4612e52a4398aa316ee9968"} Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.368474 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e9be94f855b279f0cd666914ae7fa37f2d3745a4612e52a4398aa316ee9968" Feb 23 19:01:05 crc kubenswrapper[4768]: I0223 19:01:05.368294 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29531221-9gvxw" Feb 23 19:01:11 crc kubenswrapper[4768]: I0223 19:01:11.308979 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:01:11 crc kubenswrapper[4768]: E0223 19:01:11.310538 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:01:23 crc kubenswrapper[4768]: I0223 19:01:23.309086 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:01:23 crc kubenswrapper[4768]: E0223 19:01:23.310809 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:01:26 crc kubenswrapper[4768]: I0223 19:01:26.625031 4768 generic.go:334] "Generic (PLEG): container finished" podID="964d25fb-0600-4332-9f40-85f700d35088" containerID="263ecee1fc741ae53e999634807a4e9f8652c15e02b0178a6a23c3ac1451ccb0" exitCode=0 Feb 23 19:01:26 crc kubenswrapper[4768]: I0223 19:01:26.625491 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" 
event={"ID":"964d25fb-0600-4332-9f40-85f700d35088","Type":"ContainerDied","Data":"263ecee1fc741ae53e999634807a4e9f8652c15e02b0178a6a23c3ac1451ccb0"} Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.079495 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.220266 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam\") pod \"964d25fb-0600-4332-9f40-85f700d35088\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.220393 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory\") pod \"964d25fb-0600-4332-9f40-85f700d35088\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.220585 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qftql\" (UniqueName: \"kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql\") pod \"964d25fb-0600-4332-9f40-85f700d35088\" (UID: \"964d25fb-0600-4332-9f40-85f700d35088\") " Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.235515 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql" (OuterVolumeSpecName: "kube-api-access-qftql") pod "964d25fb-0600-4332-9f40-85f700d35088" (UID: "964d25fb-0600-4332-9f40-85f700d35088"). InnerVolumeSpecName "kube-api-access-qftql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.274450 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "964d25fb-0600-4332-9f40-85f700d35088" (UID: "964d25fb-0600-4332-9f40-85f700d35088"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.316454 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory" (OuterVolumeSpecName: "inventory") pod "964d25fb-0600-4332-9f40-85f700d35088" (UID: "964d25fb-0600-4332-9f40-85f700d35088"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.324784 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.324823 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/964d25fb-0600-4332-9f40-85f700d35088-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.324836 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qftql\" (UniqueName: \"kubernetes.io/projected/964d25fb-0600-4332-9f40-85f700d35088-kube-api-access-qftql\") on node \"crc\" DevicePath \"\"" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.647752 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" 
event={"ID":"964d25fb-0600-4332-9f40-85f700d35088","Type":"ContainerDied","Data":"47db2a5435e7ef333cb34cb9bac7d47c20379106a0879ce89af26c27889e8de3"} Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.647824 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47db2a5435e7ef333cb34cb9bac7d47c20379106a0879ce89af26c27889e8de3" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.647853 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.754915 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq"] Feb 23 19:01:28 crc kubenswrapper[4768]: E0223 19:01:28.755395 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="964d25fb-0600-4332-9f40-85f700d35088" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.755415 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="964d25fb-0600-4332-9f40-85f700d35088" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 19:01:28 crc kubenswrapper[4768]: E0223 19:01:28.755437 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" containerName="keystone-cron" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.755445 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" containerName="keystone-cron" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.755626 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f06a77a-756a-4cc8-9cea-c6c0da57bfd0" containerName="keystone-cron" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.755657 4768 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="964d25fb-0600-4332-9f40-85f700d35088" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.756327 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.759432 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.760223 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.761470 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.762234 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.776622 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq"] Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.939420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.939971 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gkrn\" (UniqueName: \"kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:28 crc kubenswrapper[4768]: I0223 19:01:28.940083 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.042678 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.043040 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gkrn\" (UniqueName: \"kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.043225 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.047503 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.047834 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.068125 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gkrn\" (UniqueName: \"kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.073463 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:01:29 crc kubenswrapper[4768]: I0223 19:01:29.720092 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq"] Feb 23 19:01:30 crc kubenswrapper[4768]: I0223 19:01:30.675724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" event={"ID":"fd5b2e52-1d19-459a-ae2f-a78b5a7df018","Type":"ContainerStarted","Data":"283d502a3d3dd558fda6eba7c0231638d4499e19d72b5c296629eb7eb68f015e"} Feb 23 19:01:30 crc kubenswrapper[4768]: I0223 19:01:30.677161 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" event={"ID":"fd5b2e52-1d19-459a-ae2f-a78b5a7df018","Type":"ContainerStarted","Data":"d1b3fdbeef57cee8b2b5a730c1af1d0568ea43cd08bb3553f7eb57d110b4c1a7"} Feb 23 19:01:30 crc kubenswrapper[4768]: I0223 19:01:30.700060 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" podStartSLOduration=2.308185423 podStartE2EDuration="2.70002935s" podCreationTimestamp="2026-02-23 19:01:28 +0000 UTC" firstStartedPulling="2026-02-23 19:01:29.721942514 +0000 UTC m=+1685.112428314" lastFinishedPulling="2026-02-23 19:01:30.113786391 +0000 UTC m=+1685.504272241" observedRunningTime="2026-02-23 19:01:30.69360016 +0000 UTC m=+1686.084086000" watchObservedRunningTime="2026-02-23 19:01:30.70002935 +0000 UTC m=+1686.090515180" Feb 23 19:01:35 crc kubenswrapper[4768]: I0223 19:01:35.317592 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:01:35 crc kubenswrapper[4768]: E0223 19:01:35.318411 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.056728 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-lg5zn"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.068075 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-250a-account-create-update-q99n5"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.079328 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-b9bx5"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.108728 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8a6b-account-create-update-stzln"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.126229 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x4hdd"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.146457 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a8c9-account-create-update-vhvc2"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.157469 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-lg5zn"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.166387 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-b9bx5"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.180786 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8a6b-account-create-update-stzln"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.191607 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-api-a8c9-account-create-update-vhvc2"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.201821 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-250a-account-create-update-q99n5"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.212507 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x4hdd"] Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.325505 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124ee684-6570-4e6c-856b-516e1b2f793a" path="/var/lib/kubelet/pods/124ee684-6570-4e6c-856b-516e1b2f793a/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.327215 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678921d0-cd54-4104-afdd-e6a47489b0e3" path="/var/lib/kubelet/pods/678921d0-cd54-4104-afdd-e6a47489b0e3/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.328372 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e834410-f86e-424f-81ac-73de81ffeb25" path="/var/lib/kubelet/pods/7e834410-f86e-424f-81ac-73de81ffeb25/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.329684 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c8735b-69ac-497c-8c20-08580587d926" path="/var/lib/kubelet/pods/92c8735b-69ac-497c-8c20-08580587d926/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.332100 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8148745-3469-4ca2-a2dd-bc459d1b5eb7" path="/var/lib/kubelet/pods/a8148745-3469-4ca2-a2dd-bc459d1b5eb7/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.333419 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd539a7e-17cc-4c2a-a066-fecd85ee2261" path="/var/lib/kubelet/pods/dd539a7e-17cc-4c2a-a066-fecd85ee2261/volumes" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.870324 4768 scope.go:117] "RemoveContainer" 
containerID="6e2a9a01e2373d545c697c8ecbf49389affe09926d20b721eab252697fa75b48" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.898158 4768 scope.go:117] "RemoveContainer" containerID="6f2ab964c3605681eacec273cc5ac72a134a8dd059ac2635eca48e192b509100" Feb 23 19:01:39 crc kubenswrapper[4768]: I0223 19:01:39.971187 4768 scope.go:117] "RemoveContainer" containerID="25845eb73b91af06a482306cbbbf8bde0c72180f7b997a4615f2608c37e93607" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.051372 4768 scope.go:117] "RemoveContainer" containerID="2701fc3cd55d0ab15a1e723319da8293ded5ed86fee531366911061601abd1a9" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.100357 4768 scope.go:117] "RemoveContainer" containerID="96e5b527a04a8c17b4de12ae091944d1fbeb89be1a996a9621eaffc7ba3a3783" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.161847 4768 scope.go:117] "RemoveContainer" containerID="4884ef943d4fbca17aa68e175d80de9e8f4e32368167654f49b4b864d3ac8008" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.202304 4768 scope.go:117] "RemoveContainer" containerID="d224c7cdfadcf1640ca1baf851c28c8981c4a22adee2a39159a4cb0ad408cef6" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.226354 4768 scope.go:117] "RemoveContainer" containerID="98aebede44299fee775fd2b2371373a24ef04409aeb5042213d336a34d8b7012" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.290573 4768 scope.go:117] "RemoveContainer" containerID="1654e674e595c8dbe8a19648fa9dfbd91bd5a475b5d43e64650b9e8dfe99478a" Feb 23 19:01:40 crc kubenswrapper[4768]: I0223 19:01:40.340735 4768 scope.go:117] "RemoveContainer" containerID="3b40f18aa1b8f59f4050e4daa0594072afca963b69b76b7c2818b7919e7be8b9" Feb 23 19:01:46 crc kubenswrapper[4768]: I0223 19:01:46.309443 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:01:46 crc kubenswrapper[4768]: E0223 19:01:46.310468 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:01:57 crc kubenswrapper[4768]: I0223 19:01:57.308711 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:01:57 crc kubenswrapper[4768]: E0223 19:01:57.310464 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.292586 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.303229 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.325296 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.403318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.403536 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.403595 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wgwr\" (UniqueName: \"kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.507009 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.507099 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.507131 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wgwr\" (UniqueName: \"kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.508829 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.508913 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.543148 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wgwr\" (UniqueName: \"kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr\") pod \"community-operators-rqlzt\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:06 crc kubenswrapper[4768]: I0223 19:02:06.650716 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:07 crc kubenswrapper[4768]: I0223 19:02:07.042735 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7zx9"] Feb 23 19:02:07 crc kubenswrapper[4768]: I0223 19:02:07.052074 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-x7zx9"] Feb 23 19:02:07 crc kubenswrapper[4768]: I0223 19:02:07.159821 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:07 crc kubenswrapper[4768]: I0223 19:02:07.322238 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3214f46e-82ed-43c6-90ab-e3c001ddb38c" path="/var/lib/kubelet/pods/3214f46e-82ed-43c6-90ab-e3c001ddb38c/volumes" Feb 23 19:02:08 crc kubenswrapper[4768]: I0223 19:02:08.102209 4768 generic.go:334] "Generic (PLEG): container finished" podID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerID="752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d" exitCode=0 Feb 23 19:02:08 crc kubenswrapper[4768]: I0223 19:02:08.102304 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerDied","Data":"752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d"} Feb 23 19:02:08 crc kubenswrapper[4768]: I0223 19:02:08.102406 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerStarted","Data":"f590ccb34d7f10bbed049f858f39303a6ed35aa61d0ced115c4ab5e335c9683d"} Feb 23 19:02:09 crc kubenswrapper[4768]: I0223 19:02:09.113071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" 
event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerStarted","Data":"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9"} Feb 23 19:02:09 crc kubenswrapper[4768]: I0223 19:02:09.308694 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:02:09 crc kubenswrapper[4768]: E0223 19:02:09.309926 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:02:10 crc kubenswrapper[4768]: I0223 19:02:10.129441 4768 generic.go:334] "Generic (PLEG): container finished" podID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerID="3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9" exitCode=0 Feb 23 19:02:10 crc kubenswrapper[4768]: I0223 19:02:10.129522 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerDied","Data":"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9"} Feb 23 19:02:11 crc kubenswrapper[4768]: I0223 19:02:11.141787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerStarted","Data":"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e"} Feb 23 19:02:11 crc kubenswrapper[4768]: I0223 19:02:11.166683 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rqlzt" podStartSLOduration=2.701950591 podStartE2EDuration="5.166662778s" 
podCreationTimestamp="2026-02-23 19:02:06 +0000 UTC" firstStartedPulling="2026-02-23 19:02:08.104377122 +0000 UTC m=+1723.494862922" lastFinishedPulling="2026-02-23 19:02:10.569089299 +0000 UTC m=+1725.959575109" observedRunningTime="2026-02-23 19:02:11.160661619 +0000 UTC m=+1726.551147439" watchObservedRunningTime="2026-02-23 19:02:11.166662778 +0000 UTC m=+1726.557148578" Feb 23 19:02:16 crc kubenswrapper[4768]: I0223 19:02:16.651608 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:16 crc kubenswrapper[4768]: I0223 19:02:16.652488 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:16 crc kubenswrapper[4768]: I0223 19:02:16.726643 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:17 crc kubenswrapper[4768]: I0223 19:02:17.281639 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:17 crc kubenswrapper[4768]: I0223 19:02:17.357364 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.222751 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rqlzt" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="registry-server" containerID="cri-o://6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e" gracePeriod=2 Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.757174 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.941154 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities\") pod \"d5172cd4-9182-41a4-a3e4-621f6c259878\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.941342 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wgwr\" (UniqueName: \"kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr\") pod \"d5172cd4-9182-41a4-a3e4-621f6c259878\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.941552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content\") pod \"d5172cd4-9182-41a4-a3e4-621f6c259878\" (UID: \"d5172cd4-9182-41a4-a3e4-621f6c259878\") " Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.944864 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities" (OuterVolumeSpecName: "utilities") pod "d5172cd4-9182-41a4-a3e4-621f6c259878" (UID: "d5172cd4-9182-41a4-a3e4-621f6c259878"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:02:19 crc kubenswrapper[4768]: I0223 19:02:19.952440 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr" (OuterVolumeSpecName: "kube-api-access-8wgwr") pod "d5172cd4-9182-41a4-a3e4-621f6c259878" (UID: "d5172cd4-9182-41a4-a3e4-621f6c259878"). InnerVolumeSpecName "kube-api-access-8wgwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.044896 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.044934 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wgwr\" (UniqueName: \"kubernetes.io/projected/d5172cd4-9182-41a4-a3e4-621f6c259878-kube-api-access-8wgwr\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.048068 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5172cd4-9182-41a4-a3e4-621f6c259878" (UID: "d5172cd4-9182-41a4-a3e4-621f6c259878"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.147708 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5172cd4-9182-41a4-a3e4-621f6c259878-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.239699 4768 generic.go:334] "Generic (PLEG): container finished" podID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerID="6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e" exitCode=0 Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.239775 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerDied","Data":"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e"} Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.239801 4768 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-rqlzt" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.239833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rqlzt" event={"ID":"d5172cd4-9182-41a4-a3e4-621f6c259878","Type":"ContainerDied","Data":"f590ccb34d7f10bbed049f858f39303a6ed35aa61d0ced115c4ab5e335c9683d"} Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.239868 4768 scope.go:117] "RemoveContainer" containerID="6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.273415 4768 scope.go:117] "RemoveContainer" containerID="3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.309555 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.323367 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rqlzt"] Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.331803 4768 scope.go:117] "RemoveContainer" containerID="752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.371889 4768 scope.go:117] "RemoveContainer" containerID="6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e" Feb 23 19:02:20 crc kubenswrapper[4768]: E0223 19:02:20.372966 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e\": container with ID starting with 6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e not found: ID does not exist" containerID="6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.373036 
4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e"} err="failed to get container status \"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e\": rpc error: code = NotFound desc = could not find container \"6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e\": container with ID starting with 6c5f316bc50aba63a45efe211c9d383bb3abefbe12f2c65933e2476a8a6c4a5e not found: ID does not exist" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.373080 4768 scope.go:117] "RemoveContainer" containerID="3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9" Feb 23 19:02:20 crc kubenswrapper[4768]: E0223 19:02:20.373671 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9\": container with ID starting with 3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9 not found: ID does not exist" containerID="3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.373753 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9"} err="failed to get container status \"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9\": rpc error: code = NotFound desc = could not find container \"3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9\": container with ID starting with 3eaccc7217b4a1b9c970a63f2df76fd1a714ba8dc302b9256f502a16f7842ec9 not found: ID does not exist" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.373925 4768 scope.go:117] "RemoveContainer" containerID="752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d" Feb 23 19:02:20 crc kubenswrapper[4768]: E0223 
19:02:20.374391 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d\": container with ID starting with 752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d not found: ID does not exist" containerID="752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d" Feb 23 19:02:20 crc kubenswrapper[4768]: I0223 19:02:20.374434 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d"} err="failed to get container status \"752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d\": rpc error: code = NotFound desc = could not find container \"752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d\": container with ID starting with 752610ff93ef0bfc2cdad686e26a384b39aba5c797d81829e5712cdb3730ac2d not found: ID does not exist" Feb 23 19:02:21 crc kubenswrapper[4768]: I0223 19:02:21.327692 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" path="/var/lib/kubelet/pods/d5172cd4-9182-41a4-a3e4-621f6c259878/volumes" Feb 23 19:02:22 crc kubenswrapper[4768]: I0223 19:02:22.307947 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:02:22 crc kubenswrapper[4768]: E0223 19:02:22.310209 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:02:30 crc kubenswrapper[4768]: I0223 19:02:30.068482 
4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-gxk2h"] Feb 23 19:02:30 crc kubenswrapper[4768]: I0223 19:02:30.076386 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-gxk2h"] Feb 23 19:02:30 crc kubenswrapper[4768]: I0223 19:02:30.094698 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6mfc"] Feb 23 19:02:30 crc kubenswrapper[4768]: I0223 19:02:30.106220 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-f6mfc"] Feb 23 19:02:31 crc kubenswrapper[4768]: I0223 19:02:31.321357 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a444bad7-3d6c-4bf7-9426-db8a387f87ac" path="/var/lib/kubelet/pods/a444bad7-3d6c-4bf7-9426-db8a387f87ac/volumes" Feb 23 19:02:31 crc kubenswrapper[4768]: I0223 19:02:31.322696 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bddc6e4f-e0b0-4343-85c5-d77aa92d190c" path="/var/lib/kubelet/pods/bddc6e4f-e0b0-4343-85c5-d77aa92d190c/volumes" Feb 23 19:02:35 crc kubenswrapper[4768]: I0223 19:02:35.319654 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:02:35 crc kubenswrapper[4768]: E0223 19:02:35.320849 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:02:38 crc kubenswrapper[4768]: I0223 19:02:38.434609 4768 generic.go:334] "Generic (PLEG): container finished" podID="fd5b2e52-1d19-459a-ae2f-a78b5a7df018" 
containerID="283d502a3d3dd558fda6eba7c0231638d4499e19d72b5c296629eb7eb68f015e" exitCode=0 Feb 23 19:02:38 crc kubenswrapper[4768]: I0223 19:02:38.434742 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" event={"ID":"fd5b2e52-1d19-459a-ae2f-a78b5a7df018","Type":"ContainerDied","Data":"283d502a3d3dd558fda6eba7c0231638d4499e19d72b5c296629eb7eb68f015e"} Feb 23 19:02:39 crc kubenswrapper[4768]: I0223 19:02:39.881040 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.011919 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gkrn\" (UniqueName: \"kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn\") pod \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.012547 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam\") pod \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.012661 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory\") pod \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\" (UID: \"fd5b2e52-1d19-459a-ae2f-a78b5a7df018\") " Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.020633 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn" (OuterVolumeSpecName: 
"kube-api-access-8gkrn") pod "fd5b2e52-1d19-459a-ae2f-a78b5a7df018" (UID: "fd5b2e52-1d19-459a-ae2f-a78b5a7df018"). InnerVolumeSpecName "kube-api-access-8gkrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.041262 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd5b2e52-1d19-459a-ae2f-a78b5a7df018" (UID: "fd5b2e52-1d19-459a-ae2f-a78b5a7df018"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.059084 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory" (OuterVolumeSpecName: "inventory") pod "fd5b2e52-1d19-459a-ae2f-a78b5a7df018" (UID: "fd5b2e52-1d19-459a-ae2f-a78b5a7df018"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.114945 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gkrn\" (UniqueName: \"kubernetes.io/projected/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-kube-api-access-8gkrn\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.114982 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.114997 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd5b2e52-1d19-459a-ae2f-a78b5a7df018-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.467090 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" event={"ID":"fd5b2e52-1d19-459a-ae2f-a78b5a7df018","Type":"ContainerDied","Data":"d1b3fdbeef57cee8b2b5a730c1af1d0568ea43cd08bb3553f7eb57d110b4c1a7"} Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.467161 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.467180 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b3fdbeef57cee8b2b5a730c1af1d0568ea43cd08bb3553f7eb57d110b4c1a7" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.592126 4768 scope.go:117] "RemoveContainer" containerID="4fe34a3364c304da503e4c8404e441842558dd8a8622e327b71edbcde95226f0" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.659304 4768 scope.go:117] "RemoveContainer" containerID="70d1adc8b624176eecb1a26f13dc7bfc98c95e720ce4ed51a82dcdbd9a259c9b" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.677836 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk"] Feb 23 19:02:40 crc kubenswrapper[4768]: E0223 19:02:40.678319 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="extract-utilities" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678338 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="extract-utilities" Feb 23 19:02:40 crc kubenswrapper[4768]: E0223 19:02:40.678359 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="registry-server" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678368 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="registry-server" Feb 23 19:02:40 crc kubenswrapper[4768]: E0223 19:02:40.678381 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="extract-content" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678388 4768 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="extract-content" Feb 23 19:02:40 crc kubenswrapper[4768]: E0223 19:02:40.678420 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5b2e52-1d19-459a-ae2f-a78b5a7df018" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678428 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5b2e52-1d19-459a-ae2f-a78b5a7df018" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678624 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5b2e52-1d19-459a-ae2f-a78b5a7df018" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.678649 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5172cd4-9182-41a4-a3e4-621f6c259878" containerName="registry-server" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.680522 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.683103 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.683578 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.683785 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.683969 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.706296 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk"] Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.710537 4768 scope.go:117] "RemoveContainer" containerID="d4da2e2667ee9b9c1780416afcb349b28083401cf2933a91cf4459b7fea12e5f" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.837166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.837256 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbh8d\" (UniqueName: \"kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.837672 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.940647 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.940890 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.940996 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbh8d\" (UniqueName: \"kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.944536 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.952703 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:40 crc kubenswrapper[4768]: I0223 19:02:40.959080 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbh8d\" (UniqueName: \"kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:41 crc kubenswrapper[4768]: I0223 19:02:40.999979 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:41 crc kubenswrapper[4768]: I0223 19:02:41.584174 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk"] Feb 23 19:02:42 crc kubenswrapper[4768]: I0223 19:02:42.495868 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" event={"ID":"c1470b37-b104-4991-a626-59fcd3936f2c","Type":"ContainerStarted","Data":"4f44c9b0b5bb8f4191d3cbc349eda84bf8857e742e30a1420ceb4db04f51f229"} Feb 23 19:02:42 crc kubenswrapper[4768]: I0223 19:02:42.495927 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" event={"ID":"c1470b37-b104-4991-a626-59fcd3936f2c","Type":"ContainerStarted","Data":"50d1471dd3fed3df12f3ad5629e596971c392d3a484d9ede663936f1b900a817"} Feb 23 19:02:42 crc kubenswrapper[4768]: I0223 19:02:42.519918 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" podStartSLOduration=2.080395455 podStartE2EDuration="2.519891513s" podCreationTimestamp="2026-02-23 19:02:40 +0000 UTC" firstStartedPulling="2026-02-23 19:02:41.593506984 +0000 UTC m=+1756.983992784" lastFinishedPulling="2026-02-23 19:02:42.033003022 +0000 UTC m=+1757.423488842" observedRunningTime="2026-02-23 19:02:42.51558966 +0000 UTC m=+1757.906075470" watchObservedRunningTime="2026-02-23 19:02:42.519891513 +0000 UTC m=+1757.910377323" Feb 23 19:02:47 crc kubenswrapper[4768]: I0223 19:02:47.557576 4768 generic.go:334] "Generic (PLEG): container finished" podID="c1470b37-b104-4991-a626-59fcd3936f2c" containerID="4f44c9b0b5bb8f4191d3cbc349eda84bf8857e742e30a1420ceb4db04f51f229" exitCode=0 Feb 23 19:02:47 crc kubenswrapper[4768]: I0223 19:02:47.557678 4768 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" event={"ID":"c1470b37-b104-4991-a626-59fcd3936f2c","Type":"ContainerDied","Data":"4f44c9b0b5bb8f4191d3cbc349eda84bf8857e742e30a1420ceb4db04f51f229"} Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.182935 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.307316 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:02:49 crc kubenswrapper[4768]: E0223 19:02:49.307692 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.347531 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbh8d\" (UniqueName: \"kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d\") pod \"c1470b37-b104-4991-a626-59fcd3936f2c\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.347724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam\") pod \"c1470b37-b104-4991-a626-59fcd3936f2c\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.347765 4768 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory\") pod \"c1470b37-b104-4991-a626-59fcd3936f2c\" (UID: \"c1470b37-b104-4991-a626-59fcd3936f2c\") " Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.354491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d" (OuterVolumeSpecName: "kube-api-access-rbh8d") pod "c1470b37-b104-4991-a626-59fcd3936f2c" (UID: "c1470b37-b104-4991-a626-59fcd3936f2c"). InnerVolumeSpecName "kube-api-access-rbh8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.376228 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1470b37-b104-4991-a626-59fcd3936f2c" (UID: "c1470b37-b104-4991-a626-59fcd3936f2c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.379037 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory" (OuterVolumeSpecName: "inventory") pod "c1470b37-b104-4991-a626-59fcd3936f2c" (UID: "c1470b37-b104-4991-a626-59fcd3936f2c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.450770 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbh8d\" (UniqueName: \"kubernetes.io/projected/c1470b37-b104-4991-a626-59fcd3936f2c-kube-api-access-rbh8d\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.450822 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.450836 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1470b37-b104-4991-a626-59fcd3936f2c-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.581724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" event={"ID":"c1470b37-b104-4991-a626-59fcd3936f2c","Type":"ContainerDied","Data":"50d1471dd3fed3df12f3ad5629e596971c392d3a484d9ede663936f1b900a817"} Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.581773 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50d1471dd3fed3df12f3ad5629e596971c392d3a484d9ede663936f1b900a817" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.581794 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.662161 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"] Feb 23 19:02:49 crc kubenswrapper[4768]: E0223 19:02:49.662710 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1470b37-b104-4991-a626-59fcd3936f2c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.662728 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1470b37-b104-4991-a626-59fcd3936f2c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.662944 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1470b37-b104-4991-a626-59fcd3936f2c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.663653 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.665753 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.667092 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.667234 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.667291 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.687418 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"]
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.756684 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4z7\" (UniqueName: \"kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.756750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.756818 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.859409 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4z7\" (UniqueName: \"kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.859479 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.859547 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.863303 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.864937 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.890056 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4z7\" (UniqueName: \"kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lgdpl\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:49 crc kubenswrapper[4768]: I0223 19:02:49.985092 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:02:50 crc kubenswrapper[4768]: I0223 19:02:50.622675 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"]
Feb 23 19:02:51 crc kubenswrapper[4768]: I0223 19:02:51.601044 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl" event={"ID":"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c","Type":"ContainerStarted","Data":"4840cfe2451ca2957e216ce5eda109f791a0857889237b03be5948115a4aab92"}
Feb 23 19:02:51 crc kubenswrapper[4768]: I0223 19:02:51.601789 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl" event={"ID":"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c","Type":"ContainerStarted","Data":"f0f60b0cf980d20616a7b4e2c91c6342f3e6987c46660e6570efd0647da42423"}
Feb 23 19:02:51 crc kubenswrapper[4768]: I0223 19:02:51.632850 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl" podStartSLOduration=2.217202328 podStartE2EDuration="2.632820073s" podCreationTimestamp="2026-02-23 19:02:49 +0000 UTC" firstStartedPulling="2026-02-23 19:02:50.655070296 +0000 UTC m=+1766.045556096" lastFinishedPulling="2026-02-23 19:02:51.070688041 +0000 UTC m=+1766.461173841" observedRunningTime="2026-02-23 19:02:51.631334524 +0000 UTC m=+1767.021820364" watchObservedRunningTime="2026-02-23 19:02:51.632820073 +0000 UTC m=+1767.023305883"
Feb 23 19:03:01 crc kubenswrapper[4768]: I0223 19:03:01.308484 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:03:01 crc kubenswrapper[4768]: E0223 19:03:01.312359 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:03:13 crc kubenswrapper[4768]: I0223 19:03:13.308291 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:03:13 crc kubenswrapper[4768]: E0223 19:03:13.309482 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:03:16 crc kubenswrapper[4768]: I0223 19:03:16.051893 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n2g99"]
Feb 23 19:03:16 crc kubenswrapper[4768]: I0223 19:03:16.066296 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n2g99"]
Feb 23 19:03:17 crc kubenswrapper[4768]: I0223 19:03:17.321748 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eca568c-ae88-4fbc-8f82-a20f41ee0ef7" path="/var/lib/kubelet/pods/9eca568c-ae88-4fbc-8f82-a20f41ee0ef7/volumes"
Feb 23 19:03:26 crc kubenswrapper[4768]: I0223 19:03:26.307968 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:03:26 crc kubenswrapper[4768]: E0223 19:03:26.310118 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:03:28 crc kubenswrapper[4768]: I0223 19:03:28.029718 4768 generic.go:334] "Generic (PLEG): container finished" podID="fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" containerID="4840cfe2451ca2957e216ce5eda109f791a0857889237b03be5948115a4aab92" exitCode=0
Feb 23 19:03:28 crc kubenswrapper[4768]: I0223 19:03:28.029830 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl" event={"ID":"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c","Type":"ContainerDied","Data":"4840cfe2451ca2957e216ce5eda109f791a0857889237b03be5948115a4aab92"}
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.555542 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.664026 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam\") pod \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") "
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.664116 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory\") pod \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") "
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.664185 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4z7\" (UniqueName: \"kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7\") pod \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\" (UID: \"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c\") "
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.670367 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7" (OuterVolumeSpecName: "kube-api-access-vb4z7") pod "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" (UID: "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c"). InnerVolumeSpecName "kube-api-access-vb4z7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.696201 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory" (OuterVolumeSpecName: "inventory") pod "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" (UID: "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.714239 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" (UID: "fa8ac6dd-0b71-465d-8658-5c10d07f1e0c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.766400 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.766439 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 19:03:29 crc kubenswrapper[4768]: I0223 19:03:29.766448 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4z7\" (UniqueName: \"kubernetes.io/projected/fa8ac6dd-0b71-465d-8658-5c10d07f1e0c-kube-api-access-vb4z7\") on node \"crc\" DevicePath \"\""
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.057400 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl" event={"ID":"fa8ac6dd-0b71-465d-8658-5c10d07f1e0c","Type":"ContainerDied","Data":"f0f60b0cf980d20616a7b4e2c91c6342f3e6987c46660e6570efd0647da42423"}
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.057463 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f60b0cf980d20616a7b4e2c91c6342f3e6987c46660e6570efd0647da42423"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.057499 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lgdpl"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.171432 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"]
Feb 23 19:03:30 crc kubenswrapper[4768]: E0223 19:03:30.172005 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.172036 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.172379 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8ac6dd-0b71-465d-8658-5c10d07f1e0c" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.173399 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.176679 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.177015 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.178352 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.178630 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.185142 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"]
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.278226 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.278320 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.278402 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.380550 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.380632 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.380719 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.386051 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.387079 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.408743 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:30 crc kubenswrapper[4768]: I0223 19:03:30.495157 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:03:31 crc kubenswrapper[4768]: I0223 19:03:31.060587 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"]
Feb 23 19:03:32 crc kubenswrapper[4768]: I0223 19:03:32.082530 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf" event={"ID":"3945e9f4-308e-4769-a7b0-2984578eda25","Type":"ContainerStarted","Data":"fa926aa45b013fb90831df470ce5be68f8bfff973cea2dc18192fffb7f68300e"}
Feb 23 19:03:32 crc kubenswrapper[4768]: I0223 19:03:32.083002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf" event={"ID":"3945e9f4-308e-4769-a7b0-2984578eda25","Type":"ContainerStarted","Data":"c7093865ec701a4d835a182682314f1696616868fc05e91e7fac4e5416c4dc53"}
Feb 23 19:03:32 crc kubenswrapper[4768]: I0223 19:03:32.128221 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf" podStartSLOduration=1.696239203 podStartE2EDuration="2.128189411s" podCreationTimestamp="2026-02-23 19:03:30 +0000 UTC" firstStartedPulling="2026-02-23 19:03:31.079820045 +0000 UTC m=+1806.470305845" lastFinishedPulling="2026-02-23 19:03:31.511770233 +0000 UTC m=+1806.902256053" observedRunningTime="2026-02-23 19:03:32.115002532 +0000 UTC m=+1807.505488372" watchObservedRunningTime="2026-02-23 19:03:32.128189411 +0000 UTC m=+1807.518675201"
Feb 23 19:03:40 crc kubenswrapper[4768]: I0223 19:03:40.309751 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:03:40 crc kubenswrapper[4768]: E0223 19:03:40.310880 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:03:40 crc kubenswrapper[4768]: I0223 19:03:40.883430 4768 scope.go:117] "RemoveContainer" containerID="711b65af74f086abcc854f43ef9c992273c395b699345777aecec2930d31774c"
Feb 23 19:03:53 crc kubenswrapper[4768]: I0223 19:03:53.308352 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:03:53 crc kubenswrapper[4768]: E0223 19:03:53.309830 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:04:08 crc kubenswrapper[4768]: I0223 19:04:08.308711 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:04:08 crc kubenswrapper[4768]: E0223 19:04:08.311010 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:04:17 crc kubenswrapper[4768]: I0223 19:04:17.665069 4768 generic.go:334] "Generic (PLEG): container finished" podID="3945e9f4-308e-4769-a7b0-2984578eda25" containerID="fa926aa45b013fb90831df470ce5be68f8bfff973cea2dc18192fffb7f68300e" exitCode=0
Feb 23 19:04:17 crc kubenswrapper[4768]: I0223 19:04:17.665206 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf" event={"ID":"3945e9f4-308e-4769-a7b0-2984578eda25","Type":"ContainerDied","Data":"fa926aa45b013fb90831df470ce5be68f8bfff973cea2dc18192fffb7f68300e"}
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.083933 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.165927 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5\") pod \"3945e9f4-308e-4769-a7b0-2984578eda25\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") "
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.166153 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam\") pod \"3945e9f4-308e-4769-a7b0-2984578eda25\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") "
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.166197 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory\") pod \"3945e9f4-308e-4769-a7b0-2984578eda25\" (UID: \"3945e9f4-308e-4769-a7b0-2984578eda25\") "
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.177116 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5" (OuterVolumeSpecName: "kube-api-access-lnlf5") pod "3945e9f4-308e-4769-a7b0-2984578eda25" (UID: "3945e9f4-308e-4769-a7b0-2984578eda25"). InnerVolumeSpecName "kube-api-access-lnlf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.206487 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3945e9f4-308e-4769-a7b0-2984578eda25" (UID: "3945e9f4-308e-4769-a7b0-2984578eda25"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.206557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory" (OuterVolumeSpecName: "inventory") pod "3945e9f4-308e-4769-a7b0-2984578eda25" (UID: "3945e9f4-308e-4769-a7b0-2984578eda25"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.268953 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnlf5\" (UniqueName: \"kubernetes.io/projected/3945e9f4-308e-4769-a7b0-2984578eda25-kube-api-access-lnlf5\") on node \"crc\" DevicePath \"\""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.269015 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.269036 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3945e9f4-308e-4769-a7b0-2984578eda25-inventory\") on node \"crc\" DevicePath \"\""
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.693824 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf" event={"ID":"3945e9f4-308e-4769-a7b0-2984578eda25","Type":"ContainerDied","Data":"c7093865ec701a4d835a182682314f1696616868fc05e91e7fac4e5416c4dc53"}
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.693945 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7093865ec701a4d835a182682314f1696616868fc05e91e7fac4e5416c4dc53"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.693977 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.812314 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bcnv8"]
Feb 23 19:04:19 crc kubenswrapper[4768]: E0223 19:04:19.812795 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3945e9f4-308e-4769-a7b0-2984578eda25" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.812815 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="3945e9f4-308e-4769-a7b0-2984578eda25" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.813008 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="3945e9f4-308e-4769-a7b0-2984578eda25" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.815094 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.820119 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.820411 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.824138 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bcnv8"]
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.826082 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.826105 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.884782 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.884940 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxl7\" (UniqueName: \"kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.885009 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.987614 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxl7\" (UniqueName: \"kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.987813 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:19 crc kubenswrapper[4768]: I0223 19:04:19.988061 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:20 crc kubenswrapper[4768]: I0223 19:04:19.997046 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:20 crc kubenswrapper[4768]: I0223 19:04:20.007771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:20 crc kubenswrapper[4768]: I0223 19:04:20.027453 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxl7\" (UniqueName: \"kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7\") pod \"ssh-known-hosts-edpm-deployment-bcnv8\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") " pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:20 crc kubenswrapper[4768]: I0223 19:04:20.137521 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:20 crc kubenswrapper[4768]: I0223 19:04:20.723055 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bcnv8"]
Feb 23 19:04:21 crc kubenswrapper[4768]: I0223 19:04:21.308791 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88"
Feb 23 19:04:21 crc kubenswrapper[4768]: E0223 19:04:21.309139 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:04:21 crc kubenswrapper[4768]: I0223 19:04:21.712724 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" event={"ID":"18767704-7745-4fb0-8802-3dc2bf209bbe","Type":"ContainerStarted","Data":"44ef4273b6f38b5ee7c0421690a0721056e43a01d1648d24b2422c3e8f02e902"}
Feb 23 19:04:23 crc kubenswrapper[4768]: I0223 19:04:23.741470 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" event={"ID":"18767704-7745-4fb0-8802-3dc2bf209bbe","Type":"ContainerStarted","Data":"3e40e5531ad703c157cf3d36a9b5ddf8d95b87a3b3f7f185814a194f66435b2d"}
Feb 23 19:04:23 crc kubenswrapper[4768]: I0223 19:04:23.760235 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" podStartSLOduration=3.008147596 podStartE2EDuration="4.760193827s" podCreationTimestamp="2026-02-23 19:04:19 +0000 UTC" firstStartedPulling="2026-02-23 19:04:20.740705625 +0000 UTC m=+1856.131191435" lastFinishedPulling="2026-02-23 19:04:22.492751856 +0000 UTC m=+1857.883237666" observedRunningTime="2026-02-23 19:04:23.756676964 +0000 UTC m=+1859.147162824" watchObservedRunningTime="2026-02-23 19:04:23.760193827 +0000 UTC m=+1859.150679627"
Feb 23 19:04:29 crc kubenswrapper[4768]: I0223 19:04:29.802589 4768 generic.go:334] "Generic (PLEG): container finished" podID="18767704-7745-4fb0-8802-3dc2bf209bbe" containerID="3e40e5531ad703c157cf3d36a9b5ddf8d95b87a3b3f7f185814a194f66435b2d" exitCode=0
Feb 23 19:04:29 crc kubenswrapper[4768]: I0223 19:04:29.802705 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" event={"ID":"18767704-7745-4fb0-8802-3dc2bf209bbe","Type":"ContainerDied","Data":"3e40e5531ad703c157cf3d36a9b5ddf8d95b87a3b3f7f185814a194f66435b2d"}
Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.294648 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8"
Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.402367 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam\") pod \"18767704-7745-4fb0-8802-3dc2bf209bbe\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") "
Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.402523 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0\") pod \"18767704-7745-4fb0-8802-3dc2bf209bbe\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") "
Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.402569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pxl7\" (UniqueName: \"kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7\") pod \"18767704-7745-4fb0-8802-3dc2bf209bbe\" (UID: \"18767704-7745-4fb0-8802-3dc2bf209bbe\") "
Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.428835 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7" (OuterVolumeSpecName: "kube-api-access-8pxl7") pod "18767704-7745-4fb0-8802-3dc2bf209bbe" (UID: "18767704-7745-4fb0-8802-3dc2bf209bbe"). InnerVolumeSpecName "kube-api-access-8pxl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.452905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "18767704-7745-4fb0-8802-3dc2bf209bbe" (UID: "18767704-7745-4fb0-8802-3dc2bf209bbe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.456058 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "18767704-7745-4fb0-8802-3dc2bf209bbe" (UID: "18767704-7745-4fb0-8802-3dc2bf209bbe"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.505044 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.505109 4768 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/18767704-7745-4fb0-8802-3dc2bf209bbe-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.505133 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pxl7\" (UniqueName: \"kubernetes.io/projected/18767704-7745-4fb0-8802-3dc2bf209bbe-kube-api-access-8pxl7\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.828109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" 
event={"ID":"18767704-7745-4fb0-8802-3dc2bf209bbe","Type":"ContainerDied","Data":"44ef4273b6f38b5ee7c0421690a0721056e43a01d1648d24b2422c3e8f02e902"} Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.828587 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ef4273b6f38b5ee7c0421690a0721056e43a01d1648d24b2422c3e8f02e902" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.828195 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bcnv8" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.916953 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v"] Feb 23 19:04:31 crc kubenswrapper[4768]: E0223 19:04:31.917682 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18767704-7745-4fb0-8802-3dc2bf209bbe" containerName="ssh-known-hosts-edpm-deployment" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.917722 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="18767704-7745-4fb0-8802-3dc2bf209bbe" containerName="ssh-known-hosts-edpm-deployment" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.918100 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="18767704-7745-4fb0-8802-3dc2bf209bbe" containerName="ssh-known-hosts-edpm-deployment" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.919101 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.922797 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.922901 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.923487 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.923535 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:04:31 crc kubenswrapper[4768]: I0223 19:04:31.930470 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v"] Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.014929 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n5w7\" (UniqueName: \"kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.015190 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.015685 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.117772 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.118008 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.118198 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n5w7\" (UniqueName: \"kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.122640 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: 
\"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.122719 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.149200 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n5w7\" (UniqueName: \"kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-ks98v\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.244593 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.308360 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:04:32 crc kubenswrapper[4768]: E0223 19:04:32.308643 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:04:32 crc kubenswrapper[4768]: I0223 19:04:32.854052 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v"] Feb 23 19:04:33 crc kubenswrapper[4768]: I0223 19:04:33.861108 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" event={"ID":"63675404-f203-4967-9c2b-817ff4d8715c","Type":"ContainerStarted","Data":"882974fd7bbc028be60e8c58c3af3093cc301ed02bef60ca2776387a8a8a685d"} Feb 23 19:04:33 crc kubenswrapper[4768]: I0223 19:04:33.861628 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" event={"ID":"63675404-f203-4967-9c2b-817ff4d8715c","Type":"ContainerStarted","Data":"588599a4f67f8208274bb93daf9df3114acd33768dd5bef68ad63f1190e67ad9"} Feb 23 19:04:33 crc kubenswrapper[4768]: I0223 19:04:33.892629 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" podStartSLOduration=2.474873137 podStartE2EDuration="2.892600368s" podCreationTimestamp="2026-02-23 19:04:31 +0000 UTC" firstStartedPulling="2026-02-23 19:04:32.86377855 +0000 UTC 
m=+1868.254264350" lastFinishedPulling="2026-02-23 19:04:33.281505791 +0000 UTC m=+1868.671991581" observedRunningTime="2026-02-23 19:04:33.880682623 +0000 UTC m=+1869.271168443" watchObservedRunningTime="2026-02-23 19:04:33.892600368 +0000 UTC m=+1869.283086208" Feb 23 19:04:40 crc kubenswrapper[4768]: I0223 19:04:40.946926 4768 generic.go:334] "Generic (PLEG): container finished" podID="63675404-f203-4967-9c2b-817ff4d8715c" containerID="882974fd7bbc028be60e8c58c3af3093cc301ed02bef60ca2776387a8a8a685d" exitCode=0 Feb 23 19:04:40 crc kubenswrapper[4768]: I0223 19:04:40.947666 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" event={"ID":"63675404-f203-4967-9c2b-817ff4d8715c","Type":"ContainerDied","Data":"882974fd7bbc028be60e8c58c3af3093cc301ed02bef60ca2776387a8a8a685d"} Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.463579 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.556699 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n5w7\" (UniqueName: \"kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7\") pod \"63675404-f203-4967-9c2b-817ff4d8715c\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.556934 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory\") pod \"63675404-f203-4967-9c2b-817ff4d8715c\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.557102 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam\") pod \"63675404-f203-4967-9c2b-817ff4d8715c\" (UID: \"63675404-f203-4967-9c2b-817ff4d8715c\") " Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.564925 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7" (OuterVolumeSpecName: "kube-api-access-9n5w7") pod "63675404-f203-4967-9c2b-817ff4d8715c" (UID: "63675404-f203-4967-9c2b-817ff4d8715c"). InnerVolumeSpecName "kube-api-access-9n5w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.592126 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory" (OuterVolumeSpecName: "inventory") pod "63675404-f203-4967-9c2b-817ff4d8715c" (UID: "63675404-f203-4967-9c2b-817ff4d8715c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.597218 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "63675404-f203-4967-9c2b-817ff4d8715c" (UID: "63675404-f203-4967-9c2b-817ff4d8715c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.659951 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.659995 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/63675404-f203-4967-9c2b-817ff4d8715c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.660010 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n5w7\" (UniqueName: \"kubernetes.io/projected/63675404-f203-4967-9c2b-817ff4d8715c-kube-api-access-9n5w7\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.971990 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" event={"ID":"63675404-f203-4967-9c2b-817ff4d8715c","Type":"ContainerDied","Data":"588599a4f67f8208274bb93daf9df3114acd33768dd5bef68ad63f1190e67ad9"} Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.972051 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588599a4f67f8208274bb93daf9df3114acd33768dd5bef68ad63f1190e67ad9" Feb 23 19:04:42 crc kubenswrapper[4768]: I0223 19:04:42.972111 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-ks98v" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.139820 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc"] Feb 23 19:04:43 crc kubenswrapper[4768]: E0223 19:04:43.140800 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63675404-f203-4967-9c2b-817ff4d8715c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.141069 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="63675404-f203-4967-9c2b-817ff4d8715c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.141710 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="63675404-f203-4967-9c2b-817ff4d8715c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.143493 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.148284 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.148377 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.148670 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.148851 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.151814 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc"] Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.274120 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.274235 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.274351 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cch7n\" (UniqueName: \"kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.376323 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cch7n\" (UniqueName: \"kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.376933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.377210 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.380860 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.388501 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.396018 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cch7n\" (UniqueName: \"kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.474708 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.866726 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc"] Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.878455 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 19:04:43 crc kubenswrapper[4768]: I0223 19:04:43.986271 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" event={"ID":"68e380e8-220c-4c0e-88e4-a818fb37fe57","Type":"ContainerStarted","Data":"0841add87c353833d0cbe5bb17d4e8c92c10d4a6d388dc50aafe7cacd2b3956c"} Feb 23 19:04:45 crc kubenswrapper[4768]: I0223 19:04:44.999655 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" event={"ID":"68e380e8-220c-4c0e-88e4-a818fb37fe57","Type":"ContainerStarted","Data":"375547c76d58b33d6ea90d073b30fe38a236b2d537190e08cd02bd84f80b8383"} Feb 23 19:04:45 crc kubenswrapper[4768]: I0223 19:04:45.022595 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" podStartSLOduration=1.579861366 podStartE2EDuration="2.022579598s" podCreationTimestamp="2026-02-23 19:04:43 +0000 UTC" firstStartedPulling="2026-02-23 19:04:43.878065413 +0000 UTC m=+1879.268551233" lastFinishedPulling="2026-02-23 19:04:44.320783665 +0000 UTC m=+1879.711269465" observedRunningTime="2026-02-23 19:04:45.021876569 +0000 UTC m=+1880.412362409" watchObservedRunningTime="2026-02-23 19:04:45.022579598 +0000 UTC m=+1880.413065398" Feb 23 19:04:46 crc kubenswrapper[4768]: I0223 19:04:46.307432 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:04:47 crc kubenswrapper[4768]: I0223 
19:04:47.025590 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b"} Feb 23 19:04:54 crc kubenswrapper[4768]: I0223 19:04:54.097930 4768 generic.go:334] "Generic (PLEG): container finished" podID="68e380e8-220c-4c0e-88e4-a818fb37fe57" containerID="375547c76d58b33d6ea90d073b30fe38a236b2d537190e08cd02bd84f80b8383" exitCode=0 Feb 23 19:04:54 crc kubenswrapper[4768]: I0223 19:04:54.098067 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" event={"ID":"68e380e8-220c-4c0e-88e4-a818fb37fe57","Type":"ContainerDied","Data":"375547c76d58b33d6ea90d073b30fe38a236b2d537190e08cd02bd84f80b8383"} Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.644788 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.758418 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cch7n\" (UniqueName: \"kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n\") pod \"68e380e8-220c-4c0e-88e4-a818fb37fe57\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.758774 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam\") pod \"68e380e8-220c-4c0e-88e4-a818fb37fe57\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.759481 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory\") pod \"68e380e8-220c-4c0e-88e4-a818fb37fe57\" (UID: \"68e380e8-220c-4c0e-88e4-a818fb37fe57\") " Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.765496 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n" (OuterVolumeSpecName: "kube-api-access-cch7n") pod "68e380e8-220c-4c0e-88e4-a818fb37fe57" (UID: "68e380e8-220c-4c0e-88e4-a818fb37fe57"). InnerVolumeSpecName "kube-api-access-cch7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.786539 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory" (OuterVolumeSpecName: "inventory") pod "68e380e8-220c-4c0e-88e4-a818fb37fe57" (UID: "68e380e8-220c-4c0e-88e4-a818fb37fe57"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.794168 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "68e380e8-220c-4c0e-88e4-a818fb37fe57" (UID: "68e380e8-220c-4c0e-88e4-a818fb37fe57"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.862294 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.862325 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cch7n\" (UniqueName: \"kubernetes.io/projected/68e380e8-220c-4c0e-88e4-a818fb37fe57-kube-api-access-cch7n\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:55 crc kubenswrapper[4768]: I0223 19:04:55.862475 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/68e380e8-220c-4c0e-88e4-a818fb37fe57-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.123064 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" event={"ID":"68e380e8-220c-4c0e-88e4-a818fb37fe57","Type":"ContainerDied","Data":"0841add87c353833d0cbe5bb17d4e8c92c10d4a6d388dc50aafe7cacd2b3956c"} Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.123577 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0841add87c353833d0cbe5bb17d4e8c92c10d4a6d388dc50aafe7cacd2b3956c" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 
19:04:56.123158 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.220705 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f"] Feb 23 19:04:56 crc kubenswrapper[4768]: E0223 19:04:56.221084 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68e380e8-220c-4c0e-88e4-a818fb37fe57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.221099 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="68e380e8-220c-4c0e-88e4-a818fb37fe57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.221307 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="68e380e8-220c-4c0e-88e4-a818fb37fe57" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.221981 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.225518 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.225715 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.226144 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.226177 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.226644 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.226216 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.226451 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.231975 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.238149 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f"] Feb 23 19:04:56 crc kubenswrapper[4768]: E0223 19:04:56.335106 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68e380e8_220c_4c0e_88e4_a818fb37fe57.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68e380e8_220c_4c0e_88e4_a818fb37fe57.slice/crio-0841add87c353833d0cbe5bb17d4e8c92c10d4a6d388dc50aafe7cacd2b3956c\": RecentStats: unable to find data in memory cache]" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.372584 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.372661 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.372695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.372716 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.372750 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373022 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl7ch\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373138 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373202 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373318 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373435 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373507 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.373543 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475591 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl7ch\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475644 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475687 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475727 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475798 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475826 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475853 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.475971 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.476010 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.476049 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.476079 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.476114 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.476169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.483085 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.483095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.483957 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.484039 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc 
kubenswrapper[4768]: I0223 19:04:56.484242 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.484992 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.485340 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.485580 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.485980 4768 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.486991 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.487149 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.488510 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.492038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.499669 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl7ch\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:56 crc kubenswrapper[4768]: I0223 19:04:56.545822 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:04:57 crc kubenswrapper[4768]: I0223 19:04:57.079168 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f"] Feb 23 19:04:57 crc kubenswrapper[4768]: I0223 19:04:57.134390 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" event={"ID":"de5a4703-0650-427d-a791-f9a3386ca413","Type":"ContainerStarted","Data":"58c176f01156fecb3e2373b047fd349787394a0f5e04ed7e6bc6c6182ee59780"} Feb 23 19:04:58 crc kubenswrapper[4768]: I0223 19:04:58.142522 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" event={"ID":"de5a4703-0650-427d-a791-f9a3386ca413","Type":"ContainerStarted","Data":"562cf2c95a16cd01cd446652ec4967bb3387ca4c58c9a4fa88bea9eb6ab664a7"} Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.395877 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" podStartSLOduration=6.977817984 podStartE2EDuration="7.395851155s" 
podCreationTimestamp="2026-02-23 19:04:56 +0000 UTC" firstStartedPulling="2026-02-23 19:04:57.078310288 +0000 UTC m=+1892.468796088" lastFinishedPulling="2026-02-23 19:04:57.496343449 +0000 UTC m=+1892.886829259" observedRunningTime="2026-02-23 19:04:58.168593259 +0000 UTC m=+1893.559079139" watchObservedRunningTime="2026-02-23 19:05:03.395851155 +0000 UTC m=+1898.786336945" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.417492 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.422808 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.433424 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.536824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6r6\" (UniqueName: \"kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.536946 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.536995 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.639623 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6r6\" (UniqueName: \"kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.639802 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.639880 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.640574 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.640919 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.688301 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6r6\" (UniqueName: \"kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6\") pod \"certified-operators-chtmh\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:03 crc kubenswrapper[4768]: I0223 19:05:03.778071 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:04 crc kubenswrapper[4768]: I0223 19:05:04.377514 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:05 crc kubenswrapper[4768]: I0223 19:05:05.227922 4768 generic.go:334] "Generic (PLEG): container finished" podID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerID="576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0" exitCode=0 Feb 23 19:05:05 crc kubenswrapper[4768]: I0223 19:05:05.228049 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerDied","Data":"576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0"} Feb 23 19:05:05 crc kubenswrapper[4768]: I0223 19:05:05.228372 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerStarted","Data":"b351c91cdf8c248ec1fc918a22ab94b4bc0b76cf0af6434753057c3924047862"} Feb 23 19:05:06 crc kubenswrapper[4768]: I0223 19:05:06.238345 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerStarted","Data":"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15"} Feb 23 19:05:06 crc kubenswrapper[4768]: E0223 19:05:06.637999 4768 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf208fafa_4cf5_46f6_89ee_13d96ef26070.slice/crio-conmon-433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf208fafa_4cf5_46f6_89ee_13d96ef26070.slice/crio-433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15.scope\": RecentStats: unable to find data in memory cache]" Feb 23 19:05:07 crc kubenswrapper[4768]: I0223 19:05:07.288873 4768 generic.go:334] "Generic (PLEG): container finished" podID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerID="433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15" exitCode=0 Feb 23 19:05:07 crc kubenswrapper[4768]: I0223 19:05:07.288943 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerDied","Data":"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15"} Feb 23 19:05:08 crc kubenswrapper[4768]: I0223 19:05:08.301159 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerStarted","Data":"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599"} Feb 23 19:05:08 crc kubenswrapper[4768]: I0223 19:05:08.329154 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-chtmh" podStartSLOduration=2.766853245 
podStartE2EDuration="5.329131542s" podCreationTimestamp="2026-02-23 19:05:03 +0000 UTC" firstStartedPulling="2026-02-23 19:05:05.2305899 +0000 UTC m=+1900.621075740" lastFinishedPulling="2026-02-23 19:05:07.792868237 +0000 UTC m=+1903.183354037" observedRunningTime="2026-02-23 19:05:08.322213504 +0000 UTC m=+1903.712699324" watchObservedRunningTime="2026-02-23 19:05:08.329131542 +0000 UTC m=+1903.719617352" Feb 23 19:05:13 crc kubenswrapper[4768]: I0223 19:05:13.778871 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:13 crc kubenswrapper[4768]: I0223 19:05:13.779507 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:13 crc kubenswrapper[4768]: I0223 19:05:13.838656 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:14 crc kubenswrapper[4768]: I0223 19:05:14.437696 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:14 crc kubenswrapper[4768]: I0223 19:05:14.507742 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.404755 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-chtmh" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="registry-server" containerID="cri-o://a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599" gracePeriod=2 Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.876280 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.958803 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities\") pod \"f208fafa-4cf5-46f6-89ee-13d96ef26070\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.958875 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content\") pod \"f208fafa-4cf5-46f6-89ee-13d96ef26070\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.959166 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt6r6\" (UniqueName: \"kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6\") pod \"f208fafa-4cf5-46f6-89ee-13d96ef26070\" (UID: \"f208fafa-4cf5-46f6-89ee-13d96ef26070\") " Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.960331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities" (OuterVolumeSpecName: "utilities") pod "f208fafa-4cf5-46f6-89ee-13d96ef26070" (UID: "f208fafa-4cf5-46f6-89ee-13d96ef26070"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:05:16 crc kubenswrapper[4768]: I0223 19:05:16.968116 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6" (OuterVolumeSpecName: "kube-api-access-dt6r6") pod "f208fafa-4cf5-46f6-89ee-13d96ef26070" (UID: "f208fafa-4cf5-46f6-89ee-13d96ef26070"). InnerVolumeSpecName "kube-api-access-dt6r6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.061776 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt6r6\" (UniqueName: \"kubernetes.io/projected/f208fafa-4cf5-46f6-89ee-13d96ef26070-kube-api-access-dt6r6\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.061818 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.421442 4768 generic.go:334] "Generic (PLEG): container finished" podID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerID="a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599" exitCode=0 Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.421510 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerDied","Data":"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599"} Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.421555 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-chtmh" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.421587 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-chtmh" event={"ID":"f208fafa-4cf5-46f6-89ee-13d96ef26070","Type":"ContainerDied","Data":"b351c91cdf8c248ec1fc918a22ab94b4bc0b76cf0af6434753057c3924047862"} Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.421614 4768 scope.go:117] "RemoveContainer" containerID="a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.458484 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f208fafa-4cf5-46f6-89ee-13d96ef26070" (UID: "f208fafa-4cf5-46f6-89ee-13d96ef26070"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.466268 4768 scope.go:117] "RemoveContainer" containerID="433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.474399 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f208fafa-4cf5-46f6-89ee-13d96ef26070-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.491939 4768 scope.go:117] "RemoveContainer" containerID="576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.558084 4768 scope.go:117] "RemoveContainer" containerID="a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599" Feb 23 19:05:17 crc kubenswrapper[4768]: E0223 19:05:17.559194 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599\": container with ID starting with a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599 not found: ID does not exist" containerID="a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.559239 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599"} err="failed to get container status \"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599\": rpc error: code = NotFound desc = could not find container \"a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599\": container with ID starting with a06bbe633969fd7c95a4e8a32fbcd5dd0987b0828dc75e6f79faf1baa427d599 not found: ID does not exist" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.559331 4768 scope.go:117] "RemoveContainer" containerID="433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15" Feb 23 19:05:17 crc kubenswrapper[4768]: E0223 19:05:17.559723 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15\": container with ID starting with 433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15 not found: ID does not exist" containerID="433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.559755 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15"} err="failed to get container status \"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15\": rpc error: code = NotFound desc = could not find container 
\"433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15\": container with ID starting with 433756526b6ae336621475558c823180f5443f5fc6a9d8d4c03722bd2e008f15 not found: ID does not exist" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.559772 4768 scope.go:117] "RemoveContainer" containerID="576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0" Feb 23 19:05:17 crc kubenswrapper[4768]: E0223 19:05:17.560211 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0\": container with ID starting with 576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0 not found: ID does not exist" containerID="576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.560284 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0"} err="failed to get container status \"576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0\": rpc error: code = NotFound desc = could not find container \"576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0\": container with ID starting with 576a9d473ac5a9482396826cdacc4d67b0c442d101cd9ea5a0e4e5a8824dc6d0 not found: ID does not exist" Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.774744 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:17 crc kubenswrapper[4768]: I0223 19:05:17.789360 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-chtmh"] Feb 23 19:05:19 crc kubenswrapper[4768]: I0223 19:05:19.325871 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" 
path="/var/lib/kubelet/pods/f208fafa-4cf5-46f6-89ee-13d96ef26070/volumes" Feb 23 19:05:32 crc kubenswrapper[4768]: I0223 19:05:32.605469 4768 generic.go:334] "Generic (PLEG): container finished" podID="de5a4703-0650-427d-a791-f9a3386ca413" containerID="562cf2c95a16cd01cd446652ec4967bb3387ca4c58c9a4fa88bea9eb6ab664a7" exitCode=0 Feb 23 19:05:32 crc kubenswrapper[4768]: I0223 19:05:32.605539 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" event={"ID":"de5a4703-0650-427d-a791-f9a3386ca413","Type":"ContainerDied","Data":"562cf2c95a16cd01cd446652ec4967bb3387ca4c58c9a4fa88bea9eb6ab664a7"} Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.186446 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.359439 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.359489 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.359551 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" 
(UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.359575 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360643 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl7ch\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360759 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360793 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360835 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc 
kubenswrapper[4768]: I0223 19:05:34.360874 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360911 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360951 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.360998 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.361020 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 
19:05:34.361087 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory\") pod \"de5a4703-0650-427d-a791-f9a3386ca413\" (UID: \"de5a4703-0650-427d-a791-f9a3386ca413\") " Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.366706 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.367360 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.370977 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.371604 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.372110 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.372233 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.372346 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.372758 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.372870 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch" (OuterVolumeSpecName: "kube-api-access-dl7ch") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "kube-api-access-dl7ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.380417 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.380475 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.380489 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.402467 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory" (OuterVolumeSpecName: "inventory") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.405418 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "de5a4703-0650-427d-a791-f9a3386ca413" (UID: "de5a4703-0650-427d-a791-f9a3386ca413"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464729 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464787 4768 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464811 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl7ch\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-kube-api-access-dl7ch\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464835 4768 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464856 4768 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464879 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464903 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464924 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464945 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464969 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.464994 4768 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.465014 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.465035 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de5a4703-0650-427d-a791-f9a3386ca413-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.465054 4768 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/de5a4703-0650-427d-a791-f9a3386ca413-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.634105 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" event={"ID":"de5a4703-0650-427d-a791-f9a3386ca413","Type":"ContainerDied","Data":"58c176f01156fecb3e2373b047fd349787394a0f5e04ed7e6bc6c6182ee59780"} Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.634185 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.634195 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c176f01156fecb3e2373b047fd349787394a0f5e04ed7e6bc6c6182ee59780" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.762658 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc"] Feb 23 19:05:34 crc kubenswrapper[4768]: E0223 19:05:34.763572 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="registry-server" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.763599 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="registry-server" Feb 23 19:05:34 crc kubenswrapper[4768]: E0223 19:05:34.763620 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="extract-content" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.763632 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="extract-content" Feb 
23 19:05:34 crc kubenswrapper[4768]: E0223 19:05:34.763671 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de5a4703-0650-427d-a791-f9a3386ca413" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.763688 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="de5a4703-0650-427d-a791-f9a3386ca413" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 19:05:34 crc kubenswrapper[4768]: E0223 19:05:34.763722 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="extract-utilities" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.763732 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="extract-utilities" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.763997 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="de5a4703-0650-427d-a791-f9a3386ca413" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.764026 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f208fafa-4cf5-46f6-89ee-13d96ef26070" containerName="registry-server" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.764841 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.767278 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.767798 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.767883 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.768667 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.771946 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.788036 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc"] Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.872698 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.872812 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.872867 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.873089 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.873233 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqr4\" (UniqueName: \"kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.975904 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.975997 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.976071 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.976141 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.976215 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqr4\" (UniqueName: \"kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.977761 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.982345 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.983382 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:34 crc kubenswrapper[4768]: I0223 19:05:34.988475 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:35 crc kubenswrapper[4768]: I0223 19:05:35.011649 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqr4\" (UniqueName: \"kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bkgsc\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:35 crc kubenswrapper[4768]: I0223 19:05:35.094781 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:05:35 crc kubenswrapper[4768]: I0223 19:05:35.499120 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc"] Feb 23 19:05:35 crc kubenswrapper[4768]: I0223 19:05:35.643308 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" event={"ID":"76867435-2307-4032-a6ae-203f8009d08d","Type":"ContainerStarted","Data":"30d9869d79beab119885cd7241aa53d84527d29667a85d054696478a302504f9"} Feb 23 19:05:36 crc kubenswrapper[4768]: I0223 19:05:36.655095 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" event={"ID":"76867435-2307-4032-a6ae-203f8009d08d","Type":"ContainerStarted","Data":"16285051901b9c650f50980c81638abc2e030ae0f8b51c33f2a5dee4e2183566"} Feb 23 19:05:36 crc kubenswrapper[4768]: I0223 19:05:36.676631 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" podStartSLOduration=2.208226527 podStartE2EDuration="2.676603656s" podCreationTimestamp="2026-02-23 19:05:34 +0000 UTC" firstStartedPulling="2026-02-23 19:05:35.508001426 +0000 UTC m=+1930.898487236" lastFinishedPulling="2026-02-23 19:05:35.976378555 +0000 UTC m=+1931.366864365" observedRunningTime="2026-02-23 19:05:36.674284003 +0000 UTC m=+1932.064769833" watchObservedRunningTime="2026-02-23 19:05:36.676603656 +0000 UTC m=+1932.067089476" Feb 23 19:06:37 crc kubenswrapper[4768]: I0223 19:06:37.364166 4768 generic.go:334] "Generic (PLEG): container finished" podID="76867435-2307-4032-a6ae-203f8009d08d" containerID="16285051901b9c650f50980c81638abc2e030ae0f8b51c33f2a5dee4e2183566" exitCode=0 Feb 23 19:06:37 crc kubenswrapper[4768]: I0223 19:06:37.364294 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" event={"ID":"76867435-2307-4032-a6ae-203f8009d08d","Type":"ContainerDied","Data":"16285051901b9c650f50980c81638abc2e030ae0f8b51c33f2a5dee4e2183566"} Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.923366 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.956402 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam\") pod \"76867435-2307-4032-a6ae-203f8009d08d\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.956494 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle\") pod \"76867435-2307-4032-a6ae-203f8009d08d\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.956561 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory\") pod \"76867435-2307-4032-a6ae-203f8009d08d\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.956728 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lqr4\" (UniqueName: \"kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4\") pod \"76867435-2307-4032-a6ae-203f8009d08d\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.956789 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0\") pod \"76867435-2307-4032-a6ae-203f8009d08d\" (UID: \"76867435-2307-4032-a6ae-203f8009d08d\") " Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.963794 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "76867435-2307-4032-a6ae-203f8009d08d" (UID: "76867435-2307-4032-a6ae-203f8009d08d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.964840 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4" (OuterVolumeSpecName: "kube-api-access-2lqr4") pod "76867435-2307-4032-a6ae-203f8009d08d" (UID: "76867435-2307-4032-a6ae-203f8009d08d"). InnerVolumeSpecName "kube-api-access-2lqr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.986053 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76867435-2307-4032-a6ae-203f8009d08d" (UID: "76867435-2307-4032-a6ae-203f8009d08d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.989410 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory" (OuterVolumeSpecName: "inventory") pod "76867435-2307-4032-a6ae-203f8009d08d" (UID: "76867435-2307-4032-a6ae-203f8009d08d"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:06:38 crc kubenswrapper[4768]: I0223 19:06:38.991491 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "76867435-2307-4032-a6ae-203f8009d08d" (UID: "76867435-2307-4032-a6ae-203f8009d08d"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.059610 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lqr4\" (UniqueName: \"kubernetes.io/projected/76867435-2307-4032-a6ae-203f8009d08d-kube-api-access-2lqr4\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.059660 4768 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/76867435-2307-4032-a6ae-203f8009d08d-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.059673 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.059686 4768 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.059699 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76867435-2307-4032-a6ae-203f8009d08d-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 
19:06:39.396955 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" event={"ID":"76867435-2307-4032-a6ae-203f8009d08d","Type":"ContainerDied","Data":"30d9869d79beab119885cd7241aa53d84527d29667a85d054696478a302504f9"} Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.397041 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d9869d79beab119885cd7241aa53d84527d29667a85d054696478a302504f9" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.397068 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bkgsc" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.505999 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz"] Feb 23 19:06:39 crc kubenswrapper[4768]: E0223 19:06:39.506521 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76867435-2307-4032-a6ae-203f8009d08d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.506544 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="76867435-2307-4032-a6ae-203f8009d08d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.506731 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="76867435-2307-4032-a6ae-203f8009d08d" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.507413 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.512701 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.512757 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.512941 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.513118 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.513236 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.513119 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.517477 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz"] Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.573790 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.574004 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbh2x\" (UniqueName: \"kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.574153 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.574324 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.574417 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.574518 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.676865 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.676938 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.676981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.677033 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.677096 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbh2x\" (UniqueName: \"kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.677180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.684280 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.690346 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.690601 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.690627 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.691029 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.701069 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbh2x\" (UniqueName: \"kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:39 crc kubenswrapper[4768]: I0223 19:06:39.838515 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:06:40 crc kubenswrapper[4768]: I0223 19:06:40.450938 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz"] Feb 23 19:06:41 crc kubenswrapper[4768]: I0223 19:06:41.428018 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" event={"ID":"8126924c-9f66-4df2-ac7c-eedcd34153b7","Type":"ContainerStarted","Data":"7284c5eda46abef3350d7313998c3fbe3631a203f90a7030a71d18efb7821432"} Feb 23 19:06:41 crc kubenswrapper[4768]: I0223 19:06:41.428535 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" event={"ID":"8126924c-9f66-4df2-ac7c-eedcd34153b7","Type":"ContainerStarted","Data":"e36eaa43aaf8e806293c5165613512bf4f0605bafa268da40c20ffbb90c666c7"} Feb 23 19:07:09 crc kubenswrapper[4768]: I0223 19:07:09.545232 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:07:09 crc kubenswrapper[4768]: I0223 19:07:09.545796 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:07:27 crc kubenswrapper[4768]: I0223 19:07:27.965692 4768 
generic.go:334] "Generic (PLEG): container finished" podID="8126924c-9f66-4df2-ac7c-eedcd34153b7" containerID="7284c5eda46abef3350d7313998c3fbe3631a203f90a7030a71d18efb7821432" exitCode=0 Feb 23 19:07:27 crc kubenswrapper[4768]: I0223 19:07:27.965757 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" event={"ID":"8126924c-9f66-4df2-ac7c-eedcd34153b7","Type":"ContainerDied","Data":"7284c5eda46abef3350d7313998c3fbe3631a203f90a7030a71d18efb7821432"} Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.398070 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525445 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbh2x\" (UniqueName: \"kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525535 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525667 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525714 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525758 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.525825 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam\") pod \"8126924c-9f66-4df2-ac7c-eedcd34153b7\" (UID: \"8126924c-9f66-4df2-ac7c-eedcd34153b7\") " Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.531052 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.533573 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x" (OuterVolumeSpecName: "kube-api-access-cbh2x") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "kube-api-access-cbh2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.553563 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.554060 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.557540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory" (OuterVolumeSpecName: "inventory") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.583310 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "8126924c-9f66-4df2-ac7c-eedcd34153b7" (UID: "8126924c-9f66-4df2-ac7c-eedcd34153b7"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629289 4768 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629426 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629507 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbh2x\" (UniqueName: \"kubernetes.io/projected/8126924c-9f66-4df2-ac7c-eedcd34153b7-kube-api-access-cbh2x\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629571 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629626 4768 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.629689 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8126924c-9f66-4df2-ac7c-eedcd34153b7-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.991405 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" event={"ID":"8126924c-9f66-4df2-ac7c-eedcd34153b7","Type":"ContainerDied","Data":"e36eaa43aaf8e806293c5165613512bf4f0605bafa268da40c20ffbb90c666c7"} Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.991447 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz" Feb 23 19:07:29 crc kubenswrapper[4768]: I0223 19:07:29.991464 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e36eaa43aaf8e806293c5165613512bf4f0605bafa268da40c20ffbb90c666c7" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.098404 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w"] Feb 23 19:07:30 crc kubenswrapper[4768]: E0223 19:07:30.098971 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8126924c-9f66-4df2-ac7c-eedcd34153b7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.099000 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8126924c-9f66-4df2-ac7c-eedcd34153b7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.099212 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8126924c-9f66-4df2-ac7c-eedcd34153b7" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.099851 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.103626 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.104139 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.104406 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.104594 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.105911 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.127425 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w"] Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.240618 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wdsl\" (UniqueName: \"kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.241088 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: 
\"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.241130 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.241172 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.241198 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.342795 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.343759 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.343796 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.343852 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.343965 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wdsl\" (UniqueName: \"kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.348627 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.349193 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.351023 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.357804 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.358958 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wdsl\" (UniqueName: \"kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:30 crc kubenswrapper[4768]: I0223 19:07:30.417146 4768 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:07:31 crc kubenswrapper[4768]: I0223 19:07:31.062137 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w"] Feb 23 19:07:32 crc kubenswrapper[4768]: I0223 19:07:32.014615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" event={"ID":"e4de542c-566e-4b7a-a999-04b1219e40a6","Type":"ContainerStarted","Data":"d98816c073363cc9af02c542c65ff598ab556c773f86adccf25d440fa810c1c9"} Feb 23 19:07:32 crc kubenswrapper[4768]: I0223 19:07:32.015315 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" event={"ID":"e4de542c-566e-4b7a-a999-04b1219e40a6","Type":"ContainerStarted","Data":"16ec11a4c55bcbb21d8a60f78b22cb562d6460705cd10986fcc6bc3811c43b28"} Feb 23 19:07:32 crc kubenswrapper[4768]: I0223 19:07:32.034064 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" podStartSLOduration=1.6354675250000001 podStartE2EDuration="2.034029037s" podCreationTimestamp="2026-02-23 19:07:30 +0000 UTC" firstStartedPulling="2026-02-23 19:07:31.063870021 +0000 UTC m=+2046.454355861" lastFinishedPulling="2026-02-23 19:07:31.462431533 +0000 UTC m=+2046.852917373" observedRunningTime="2026-02-23 19:07:32.032053573 +0000 UTC m=+2047.422539373" watchObservedRunningTime="2026-02-23 19:07:32.034029037 +0000 UTC m=+2047.424514837" Feb 23 19:07:39 crc kubenswrapper[4768]: I0223 19:07:39.545012 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:07:39 crc 
kubenswrapper[4768]: I0223 19:07:39.546010 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:08:09 crc kubenswrapper[4768]: I0223 19:08:09.545704 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:08:09 crc kubenswrapper[4768]: I0223 19:08:09.546721 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:08:09 crc kubenswrapper[4768]: I0223 19:08:09.546795 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:08:09 crc kubenswrapper[4768]: I0223 19:08:09.548008 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:08:09 crc kubenswrapper[4768]: I0223 19:08:09.548115 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b" gracePeriod=600 Feb 23 19:08:10 crc kubenswrapper[4768]: I0223 19:08:10.416923 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b" exitCode=0 Feb 23 19:08:10 crc kubenswrapper[4768]: I0223 19:08:10.417002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b"} Feb 23 19:08:10 crc kubenswrapper[4768]: I0223 19:08:10.417615 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"} Feb 23 19:08:10 crc kubenswrapper[4768]: I0223 19:08:10.417647 4768 scope.go:117] "RemoveContainer" containerID="08c9371b4553fcc6ee42a791f7bd0fdae6e57f175d3fd505efaa7e6359bbcb88" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.641637 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.645199 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.661805 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.674524 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.674631 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.674819 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqqw\" (UniqueName: \"kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.776403 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqqw\" (UniqueName: \"kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.776523 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.776771 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.777333 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.777647 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.802067 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqqw\" (UniqueName: \"kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw\") pod \"redhat-operators-jscdq\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:06 crc kubenswrapper[4768]: I0223 19:10:06.980220 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:07 crc kubenswrapper[4768]: I0223 19:10:07.297722 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:07 crc kubenswrapper[4768]: W0223 19:10:07.322559 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61a7181b_b68b_4030_80f2_2b4599a9413b.slice/crio-72fa7c5ea00135e8a8d7dbeb11dae4544bdfb34ee34e79291944d5cfebd781ef WatchSource:0}: Error finding container 72fa7c5ea00135e8a8d7dbeb11dae4544bdfb34ee34e79291944d5cfebd781ef: Status 404 returned error can't find the container with id 72fa7c5ea00135e8a8d7dbeb11dae4544bdfb34ee34e79291944d5cfebd781ef Feb 23 19:10:08 crc kubenswrapper[4768]: I0223 19:10:08.195058 4768 generic.go:334] "Generic (PLEG): container finished" podID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerID="4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b" exitCode=0 Feb 23 19:10:08 crc kubenswrapper[4768]: I0223 19:10:08.195109 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerDied","Data":"4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b"} Feb 23 19:10:08 crc kubenswrapper[4768]: I0223 19:10:08.195447 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerStarted","Data":"72fa7c5ea00135e8a8d7dbeb11dae4544bdfb34ee34e79291944d5cfebd781ef"} Feb 23 19:10:08 crc kubenswrapper[4768]: I0223 19:10:08.197988 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 19:10:09 crc kubenswrapper[4768]: I0223 19:10:09.208739 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerStarted","Data":"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435"} Feb 23 19:10:09 crc kubenswrapper[4768]: I0223 19:10:09.545241 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:10:09 crc kubenswrapper[4768]: I0223 19:10:09.545326 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:10:11 crc kubenswrapper[4768]: I0223 19:10:11.228907 4768 generic.go:334] "Generic (PLEG): container finished" podID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerID="01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435" exitCode=0 Feb 23 19:10:11 crc kubenswrapper[4768]: I0223 19:10:11.230606 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerDied","Data":"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435"} Feb 23 19:10:13 crc kubenswrapper[4768]: I0223 19:10:13.255953 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerStarted","Data":"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed"} Feb 23 19:10:13 crc kubenswrapper[4768]: I0223 19:10:13.286944 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-jscdq" podStartSLOduration=3.1615719589999998 podStartE2EDuration="7.286925313s" podCreationTimestamp="2026-02-23 19:10:06 +0000 UTC" firstStartedPulling="2026-02-23 19:10:08.197691924 +0000 UTC m=+2203.588177734" lastFinishedPulling="2026-02-23 19:10:12.323045238 +0000 UTC m=+2207.713531088" observedRunningTime="2026-02-23 19:10:13.277382783 +0000 UTC m=+2208.667868593" watchObservedRunningTime="2026-02-23 19:10:13.286925313 +0000 UTC m=+2208.677411123" Feb 23 19:10:16 crc kubenswrapper[4768]: I0223 19:10:16.981427 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:16 crc kubenswrapper[4768]: I0223 19:10:16.981836 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:18 crc kubenswrapper[4768]: I0223 19:10:18.028812 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jscdq" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="registry-server" probeResult="failure" output=< Feb 23 19:10:18 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 19:10:18 crc kubenswrapper[4768]: > Feb 23 19:10:27 crc kubenswrapper[4768]: I0223 19:10:27.033137 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:27 crc kubenswrapper[4768]: I0223 19:10:27.084986 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:27 crc kubenswrapper[4768]: I0223 19:10:27.274568 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:28 crc kubenswrapper[4768]: I0223 19:10:28.434520 4768 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-jscdq" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="registry-server" containerID="cri-o://1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed" gracePeriod=2 Feb 23 19:10:28 crc kubenswrapper[4768]: I0223 19:10:28.988744 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.112751 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtqqw\" (UniqueName: \"kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw\") pod \"61a7181b-b68b-4030-80f2-2b4599a9413b\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.112801 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content\") pod \"61a7181b-b68b-4030-80f2-2b4599a9413b\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.112856 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities\") pod \"61a7181b-b68b-4030-80f2-2b4599a9413b\" (UID: \"61a7181b-b68b-4030-80f2-2b4599a9413b\") " Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.113905 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities" (OuterVolumeSpecName: "utilities") pod "61a7181b-b68b-4030-80f2-2b4599a9413b" (UID: "61a7181b-b68b-4030-80f2-2b4599a9413b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.119693 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw" (OuterVolumeSpecName: "kube-api-access-wtqqw") pod "61a7181b-b68b-4030-80f2-2b4599a9413b" (UID: "61a7181b-b68b-4030-80f2-2b4599a9413b"). InnerVolumeSpecName "kube-api-access-wtqqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.215757 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtqqw\" (UniqueName: \"kubernetes.io/projected/61a7181b-b68b-4030-80f2-2b4599a9413b-kube-api-access-wtqqw\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.215821 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.253781 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a7181b-b68b-4030-80f2-2b4599a9413b" (UID: "61a7181b-b68b-4030-80f2-2b4599a9413b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.317461 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a7181b-b68b-4030-80f2-2b4599a9413b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.448824 4768 generic.go:334] "Generic (PLEG): container finished" podID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerID="1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed" exitCode=0 Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.448869 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerDied","Data":"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed"} Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.448907 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jscdq" event={"ID":"61a7181b-b68b-4030-80f2-2b4599a9413b","Type":"ContainerDied","Data":"72fa7c5ea00135e8a8d7dbeb11dae4544bdfb34ee34e79291944d5cfebd781ef"} Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.448937 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jscdq" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.448958 4768 scope.go:117] "RemoveContainer" containerID="1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.480055 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.487554 4768 scope.go:117] "RemoveContainer" containerID="01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.490686 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jscdq"] Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.541353 4768 scope.go:117] "RemoveContainer" containerID="4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.581260 4768 scope.go:117] "RemoveContainer" containerID="1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed" Feb 23 19:10:29 crc kubenswrapper[4768]: E0223 19:10:29.581837 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed\": container with ID starting with 1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed not found: ID does not exist" containerID="1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.581914 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed"} err="failed to get container status \"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed\": rpc error: code = NotFound desc = could not find container 
\"1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed\": container with ID starting with 1dffdc470fa943b95754c3b1494d907f9a40febf9f071d7ff982ad5e5b80c9ed not found: ID does not exist" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.581963 4768 scope.go:117] "RemoveContainer" containerID="01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435" Feb 23 19:10:29 crc kubenswrapper[4768]: E0223 19:10:29.582546 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435\": container with ID starting with 01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435 not found: ID does not exist" containerID="01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.582593 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435"} err="failed to get container status \"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435\": rpc error: code = NotFound desc = could not find container \"01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435\": container with ID starting with 01efb1aae16f6c9a4cabfff498c1de5f0ac00c1b735076a4e426e948cfbdd435 not found: ID does not exist" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.582630 4768 scope.go:117] "RemoveContainer" containerID="4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b" Feb 23 19:10:29 crc kubenswrapper[4768]: E0223 19:10:29.582996 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b\": container with ID starting with 4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b not found: ID does not exist" 
containerID="4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b" Feb 23 19:10:29 crc kubenswrapper[4768]: I0223 19:10:29.583073 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b"} err="failed to get container status \"4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b\": rpc error: code = NotFound desc = could not find container \"4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b\": container with ID starting with 4e0cfe4bb0b6a63d851202f810053a3c391c6d87c7068912e638c89656e42d6b not found: ID does not exist" Feb 23 19:10:31 crc kubenswrapper[4768]: I0223 19:10:31.334852 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" path="/var/lib/kubelet/pods/61a7181b-b68b-4030-80f2-2b4599a9413b/volumes" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.500143 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:37 crc kubenswrapper[4768]: E0223 19:10:37.506406 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="extract-content" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.506434 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="extract-content" Feb 23 19:10:37 crc kubenswrapper[4768]: E0223 19:10:37.506461 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="registry-server" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.506472 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="registry-server" Feb 23 19:10:37 crc kubenswrapper[4768]: E0223 19:10:37.506515 4768 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="extract-utilities" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.506525 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="extract-utilities" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.506843 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a7181b-b68b-4030-80f2-2b4599a9413b" containerName="registry-server" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.509063 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.512834 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.524936 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.525014 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.525113 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t8pg\" (UniqueName: \"kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg\") pod \"redhat-marketplace-jxsnc\" 
(UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.627470 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t8pg\" (UniqueName: \"kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.627636 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.627769 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.628745 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.628971 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " 
pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.659328 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t8pg\" (UniqueName: \"kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg\") pod \"redhat-marketplace-jxsnc\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:37 crc kubenswrapper[4768]: I0223 19:10:37.853776 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:38 crc kubenswrapper[4768]: W0223 19:10:38.352375 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46e42a14_43cd_46da_9c2e_db930ae41022.slice/crio-b98fd778e84fefbe4ef27eac1db25a3a8c1d40d1af052645a83952a634993479 WatchSource:0}: Error finding container b98fd778e84fefbe4ef27eac1db25a3a8c1d40d1af052645a83952a634993479: Status 404 returned error can't find the container with id b98fd778e84fefbe4ef27eac1db25a3a8c1d40d1af052645a83952a634993479 Feb 23 19:10:38 crc kubenswrapper[4768]: I0223 19:10:38.367379 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:38 crc kubenswrapper[4768]: I0223 19:10:38.567634 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerStarted","Data":"b98fd778e84fefbe4ef27eac1db25a3a8c1d40d1af052645a83952a634993479"} Feb 23 19:10:39 crc kubenswrapper[4768]: I0223 19:10:39.546379 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 23 19:10:39 crc kubenswrapper[4768]: I0223 19:10:39.546464 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:10:39 crc kubenswrapper[4768]: I0223 19:10:39.587383 4768 generic.go:334] "Generic (PLEG): container finished" podID="46e42a14-43cd-46da-9c2e-db930ae41022" containerID="9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d" exitCode=0 Feb 23 19:10:39 crc kubenswrapper[4768]: I0223 19:10:39.587454 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerDied","Data":"9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d"} Feb 23 19:10:40 crc kubenswrapper[4768]: I0223 19:10:40.601937 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerStarted","Data":"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314"} Feb 23 19:10:41 crc kubenswrapper[4768]: I0223 19:10:41.614621 4768 generic.go:334] "Generic (PLEG): container finished" podID="46e42a14-43cd-46da-9c2e-db930ae41022" containerID="3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314" exitCode=0 Feb 23 19:10:41 crc kubenswrapper[4768]: I0223 19:10:41.614679 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerDied","Data":"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314"} Feb 23 19:10:42 crc kubenswrapper[4768]: I0223 19:10:42.628893 4768 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerStarted","Data":"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2"} Feb 23 19:10:42 crc kubenswrapper[4768]: I0223 19:10:42.650663 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jxsnc" podStartSLOduration=3.124521909 podStartE2EDuration="5.650634075s" podCreationTimestamp="2026-02-23 19:10:37 +0000 UTC" firstStartedPulling="2026-02-23 19:10:39.590056442 +0000 UTC m=+2234.980542252" lastFinishedPulling="2026-02-23 19:10:42.116168578 +0000 UTC m=+2237.506654418" observedRunningTime="2026-02-23 19:10:42.648224699 +0000 UTC m=+2238.038710579" watchObservedRunningTime="2026-02-23 19:10:42.650634075 +0000 UTC m=+2238.041119875" Feb 23 19:10:47 crc kubenswrapper[4768]: I0223 19:10:47.854279 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:47 crc kubenswrapper[4768]: I0223 19:10:47.854938 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:47 crc kubenswrapper[4768]: I0223 19:10:47.932714 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:48 crc kubenswrapper[4768]: I0223 19:10:48.768404 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:48 crc kubenswrapper[4768]: I0223 19:10:48.843122 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:50 crc kubenswrapper[4768]: I0223 19:10:50.715138 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jxsnc" 
podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="registry-server" containerID="cri-o://81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2" gracePeriod=2 Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.211323 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.298854 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t8pg\" (UniqueName: \"kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg\") pod \"46e42a14-43cd-46da-9c2e-db930ae41022\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.298925 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities\") pod \"46e42a14-43cd-46da-9c2e-db930ae41022\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.298969 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content\") pod \"46e42a14-43cd-46da-9c2e-db930ae41022\" (UID: \"46e42a14-43cd-46da-9c2e-db930ae41022\") " Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.301411 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities" (OuterVolumeSpecName: "utilities") pod "46e42a14-43cd-46da-9c2e-db930ae41022" (UID: "46e42a14-43cd-46da-9c2e-db930ae41022"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.310966 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg" (OuterVolumeSpecName: "kube-api-access-8t8pg") pod "46e42a14-43cd-46da-9c2e-db930ae41022" (UID: "46e42a14-43cd-46da-9c2e-db930ae41022"). InnerVolumeSpecName "kube-api-access-8t8pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.335404 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46e42a14-43cd-46da-9c2e-db930ae41022" (UID: "46e42a14-43cd-46da-9c2e-db930ae41022"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.403410 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.403449 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e42a14-43cd-46da-9c2e-db930ae41022-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.403465 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t8pg\" (UniqueName: \"kubernetes.io/projected/46e42a14-43cd-46da-9c2e-db930ae41022-kube-api-access-8t8pg\") on node \"crc\" DevicePath \"\"" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.730724 4768 generic.go:334] "Generic (PLEG): container finished" podID="46e42a14-43cd-46da-9c2e-db930ae41022" 
containerID="81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2" exitCode=0 Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.730774 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerDied","Data":"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2"} Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.730802 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jxsnc" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.730833 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jxsnc" event={"ID":"46e42a14-43cd-46da-9c2e-db930ae41022","Type":"ContainerDied","Data":"b98fd778e84fefbe4ef27eac1db25a3a8c1d40d1af052645a83952a634993479"} Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.730862 4768 scope.go:117] "RemoveContainer" containerID="81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.772343 4768 scope.go:117] "RemoveContainer" containerID="3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.781055 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.800859 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jxsnc"] Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.806382 4768 scope.go:117] "RemoveContainer" containerID="9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.844836 4768 scope.go:117] "RemoveContainer" containerID="81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2" Feb 23 
19:10:51 crc kubenswrapper[4768]: E0223 19:10:51.846399 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2\": container with ID starting with 81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2 not found: ID does not exist" containerID="81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.846648 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2"} err="failed to get container status \"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2\": rpc error: code = NotFound desc = could not find container \"81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2\": container with ID starting with 81b382183ab910c941c4da99f923377a14bfdba76117e799db36561c1c0ff7c2 not found: ID does not exist" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.846691 4768 scope.go:117] "RemoveContainer" containerID="3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314" Feb 23 19:10:51 crc kubenswrapper[4768]: E0223 19:10:51.847763 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314\": container with ID starting with 3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314 not found: ID does not exist" containerID="3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.847799 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314"} err="failed to get container status 
\"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314\": rpc error: code = NotFound desc = could not find container \"3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314\": container with ID starting with 3b0df544d6a92fd4214d60e2e135ebbc548d54acbe20dd6ef461f0c04edd4314 not found: ID does not exist" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.847820 4768 scope.go:117] "RemoveContainer" containerID="9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d" Feb 23 19:10:51 crc kubenswrapper[4768]: E0223 19:10:51.848107 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d\": container with ID starting with 9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d not found: ID does not exist" containerID="9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d" Feb 23 19:10:51 crc kubenswrapper[4768]: I0223 19:10:51.848212 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d"} err="failed to get container status \"9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d\": rpc error: code = NotFound desc = could not find container \"9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d\": container with ID starting with 9a0f084b1782bde8e4f087680eb8996c269f19c94653da827e7ad9e77c51380d not found: ID does not exist" Feb 23 19:10:53 crc kubenswrapper[4768]: I0223 19:10:53.329681 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" path="/var/lib/kubelet/pods/46e42a14-43cd-46da-9c2e-db930ae41022/volumes" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.546236 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.547277 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.547349 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.548985 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.549065 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" gracePeriod=600 Feb 23 19:11:09 crc kubenswrapper[4768]: E0223 19:11:09.678435 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.933091 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" exitCode=0 Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.933218 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"} Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.933679 4768 scope.go:117] "RemoveContainer" containerID="37b346db32eb6dfe9f95d37d0dec4a9f1b2f5b2115924d129b042376854bd97b" Feb 23 19:11:09 crc kubenswrapper[4768]: I0223 19:11:09.934677 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:11:09 crc kubenswrapper[4768]: E0223 19:11:09.935159 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:11:22 crc kubenswrapper[4768]: I0223 19:11:22.308060 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:11:22 crc kubenswrapper[4768]: E0223 19:11:22.309195 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:11:28 crc kubenswrapper[4768]: I0223 19:11:28.140639 4768 generic.go:334] "Generic (PLEG): container finished" podID="e4de542c-566e-4b7a-a999-04b1219e40a6" containerID="d98816c073363cc9af02c542c65ff598ab556c773f86adccf25d440fa810c1c9" exitCode=0 Feb 23 19:11:28 crc kubenswrapper[4768]: I0223 19:11:28.140741 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" event={"ID":"e4de542c-566e-4b7a-a999-04b1219e40a6","Type":"ContainerDied","Data":"d98816c073363cc9af02c542c65ff598ab556c773f86adccf25d440fa810c1c9"} Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.605527 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.712606 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle\") pod \"e4de542c-566e-4b7a-a999-04b1219e40a6\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.712666 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wdsl\" (UniqueName: \"kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl\") pod \"e4de542c-566e-4b7a-a999-04b1219e40a6\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.712724 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory\") pod \"e4de542c-566e-4b7a-a999-04b1219e40a6\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.712853 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0\") pod \"e4de542c-566e-4b7a-a999-04b1219e40a6\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.713014 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam\") pod \"e4de542c-566e-4b7a-a999-04b1219e40a6\" (UID: \"e4de542c-566e-4b7a-a999-04b1219e40a6\") " Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.719557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl" (OuterVolumeSpecName: "kube-api-access-4wdsl") pod "e4de542c-566e-4b7a-a999-04b1219e40a6" (UID: "e4de542c-566e-4b7a-a999-04b1219e40a6"). InnerVolumeSpecName "kube-api-access-4wdsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.720625 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e4de542c-566e-4b7a-a999-04b1219e40a6" (UID: "e4de542c-566e-4b7a-a999-04b1219e40a6"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.739677 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory" (OuterVolumeSpecName: "inventory") pod "e4de542c-566e-4b7a-a999-04b1219e40a6" (UID: "e4de542c-566e-4b7a-a999-04b1219e40a6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.740804 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e4de542c-566e-4b7a-a999-04b1219e40a6" (UID: "e4de542c-566e-4b7a-a999-04b1219e40a6"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.763888 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e4de542c-566e-4b7a-a999-04b1219e40a6" (UID: "e4de542c-566e-4b7a-a999-04b1219e40a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.816093 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.816158 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.816179 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.816200 4768 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4de542c-566e-4b7a-a999-04b1219e40a6-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:29 crc kubenswrapper[4768]: I0223 19:11:29.816213 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wdsl\" (UniqueName: \"kubernetes.io/projected/e4de542c-566e-4b7a-a999-04b1219e40a6-kube-api-access-4wdsl\") on node \"crc\" DevicePath \"\"" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.159022 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" event={"ID":"e4de542c-566e-4b7a-a999-04b1219e40a6","Type":"ContainerDied","Data":"16ec11a4c55bcbb21d8a60f78b22cb562d6460705cd10986fcc6bc3811c43b28"} Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.159069 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ec11a4c55bcbb21d8a60f78b22cb562d6460705cd10986fcc6bc3811c43b28" Feb 23 
19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.159128 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.336872 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf"] Feb 23 19:11:30 crc kubenswrapper[4768]: E0223 19:11:30.337275 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4de542c-566e-4b7a-a999-04b1219e40a6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337286 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4de542c-566e-4b7a-a999-04b1219e40a6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 19:11:30 crc kubenswrapper[4768]: E0223 19:11:30.337309 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="registry-server" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337315 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="registry-server" Feb 23 19:11:30 crc kubenswrapper[4768]: E0223 19:11:30.337332 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="extract-utilities" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337338 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="extract-utilities" Feb 23 19:11:30 crc kubenswrapper[4768]: E0223 19:11:30.337349 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="extract-content" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337355 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" 
containerName="extract-content" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337516 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4de542c-566e-4b7a-a999-04b1219e40a6" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.337540 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e42a14-43cd-46da-9c2e-db930ae41022" containerName="registry-server" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.338138 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.343624 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.343717 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.343631 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.343969 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.344015 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.344045 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.344105 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.353535 4768 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf"] Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.532133 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.532831 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.532928 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.532965 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.532990 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533026 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533120 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533160 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q6n5\" (UniqueName: \"kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533231 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533300 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.533342 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635402 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635458 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q6n5\" (UniqueName: \"kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: 
\"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635514 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635543 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635572 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635594 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635619 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635686 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635714 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635735 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.635752 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: 
\"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.636455 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.640550 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.640695 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.640738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.641813 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.642139 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.642274 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.646566 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.646824 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.650111 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.652911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q6n5\" (UniqueName: \"kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-f79cf\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:30 crc kubenswrapper[4768]: I0223 19:11:30.655781 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:11:31 crc kubenswrapper[4768]: I0223 19:11:31.247653 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf"] Feb 23 19:11:31 crc kubenswrapper[4768]: W0223 19:11:31.250803 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a3528f8_0776_47bf_81fa_c7bd1698938b.slice/crio-e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6 WatchSource:0}: Error finding container e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6: Status 404 returned error can't find the container with id e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6 Feb 23 19:11:32 crc kubenswrapper[4768]: I0223 19:11:32.193858 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" event={"ID":"4a3528f8-0776-47bf-81fa-c7bd1698938b","Type":"ContainerStarted","Data":"da1a2f7ec09c10e4e9a45faba397257d86cd931e9a610c99443135a220fc0a5d"} Feb 23 19:11:32 crc kubenswrapper[4768]: I0223 19:11:32.193922 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" event={"ID":"4a3528f8-0776-47bf-81fa-c7bd1698938b","Type":"ContainerStarted","Data":"e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6"} Feb 23 19:11:32 crc kubenswrapper[4768]: I0223 19:11:32.223874 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" podStartSLOduration=1.750019618 podStartE2EDuration="2.223851805s" podCreationTimestamp="2026-02-23 19:11:30 +0000 UTC" firstStartedPulling="2026-02-23 19:11:31.253546764 +0000 UTC m=+2286.644032574" lastFinishedPulling="2026-02-23 19:11:31.727378951 +0000 UTC m=+2287.117864761" observedRunningTime="2026-02-23 
19:11:32.214715805 +0000 UTC m=+2287.605201625" watchObservedRunningTime="2026-02-23 19:11:32.223851805 +0000 UTC m=+2287.614337595" Feb 23 19:11:36 crc kubenswrapper[4768]: I0223 19:11:36.308345 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:11:36 crc kubenswrapper[4768]: E0223 19:11:36.309171 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:11:48 crc kubenswrapper[4768]: I0223 19:11:48.308608 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:11:48 crc kubenswrapper[4768]: E0223 19:11:48.309500 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:11:59 crc kubenswrapper[4768]: I0223 19:11:59.308131 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:11:59 crc kubenswrapper[4768]: E0223 19:11:59.308931 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:12:14 crc kubenswrapper[4768]: I0223 19:12:14.308478 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:12:14 crc kubenswrapper[4768]: E0223 19:12:14.309580 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:12:29 crc kubenswrapper[4768]: I0223 19:12:29.307850 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:12:29 crc kubenswrapper[4768]: E0223 19:12:29.308792 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:12:42 crc kubenswrapper[4768]: I0223 19:12:42.307748 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:12:42 crc kubenswrapper[4768]: E0223 19:12:42.308644 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:12:53 crc kubenswrapper[4768]: I0223 19:12:53.308814 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:12:53 crc kubenswrapper[4768]: E0223 19:12:53.310312 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.662489 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.666656 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.688201 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.778688 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.779073 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5ls\" (UniqueName: \"kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.779356 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.881162 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.881274 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.881362 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5ls\" (UniqueName: \"kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.881926 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.882095 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.910462 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5ls\" (UniqueName: \"kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls\") pod \"community-operators-7bxrl\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:12:59 crc kubenswrapper[4768]: I0223 19:12:59.995081 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:00 crc kubenswrapper[4768]: I0223 19:13:00.579209 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:13:01 crc kubenswrapper[4768]: I0223 19:13:01.161279 4768 generic.go:334] "Generic (PLEG): container finished" podID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerID="bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d" exitCode=0 Feb 23 19:13:01 crc kubenswrapper[4768]: I0223 19:13:01.161627 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerDied","Data":"bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d"} Feb 23 19:13:01 crc kubenswrapper[4768]: I0223 19:13:01.161672 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerStarted","Data":"d99701d72d2bcef572ea355837593e642e7b1cd11df6e64442038d0272aac83a"} Feb 23 19:13:02 crc kubenswrapper[4768]: I0223 19:13:02.174609 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerStarted","Data":"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e"} Feb 23 19:13:03 crc kubenswrapper[4768]: I0223 19:13:03.188570 4768 generic.go:334] "Generic (PLEG): container finished" podID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerID="989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e" exitCode=0 Feb 23 19:13:03 crc kubenswrapper[4768]: I0223 19:13:03.188626 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" 
event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerDied","Data":"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e"} Feb 23 19:13:04 crc kubenswrapper[4768]: I0223 19:13:04.203728 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerStarted","Data":"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0"} Feb 23 19:13:04 crc kubenswrapper[4768]: I0223 19:13:04.232276 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7bxrl" podStartSLOduration=2.753080781 podStartE2EDuration="5.232258409s" podCreationTimestamp="2026-02-23 19:12:59 +0000 UTC" firstStartedPulling="2026-02-23 19:13:01.16369922 +0000 UTC m=+2376.554185060" lastFinishedPulling="2026-02-23 19:13:03.642876888 +0000 UTC m=+2379.033362688" observedRunningTime="2026-02-23 19:13:04.229917226 +0000 UTC m=+2379.620403066" watchObservedRunningTime="2026-02-23 19:13:04.232258409 +0000 UTC m=+2379.622744209" Feb 23 19:13:07 crc kubenswrapper[4768]: I0223 19:13:07.309479 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:13:07 crc kubenswrapper[4768]: E0223 19:13:07.310365 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:13:09 crc kubenswrapper[4768]: I0223 19:13:09.995762 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:09 crc 
kubenswrapper[4768]: I0223 19:13:09.996168 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:10 crc kubenswrapper[4768]: I0223 19:13:10.043168 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:10 crc kubenswrapper[4768]: I0223 19:13:10.319196 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:10 crc kubenswrapper[4768]: I0223 19:13:10.363938 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.292158 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7bxrl" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="registry-server" containerID="cri-o://f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0" gracePeriod=2 Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.768362 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.857224 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content\") pod \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.857337 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities\") pod \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.857539 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj5ls\" (UniqueName: \"kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls\") pod \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\" (UID: \"10d410ea-6d34-4675-b042-2ce9ee57a9bc\") " Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.858311 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities" (OuterVolumeSpecName: "utilities") pod "10d410ea-6d34-4675-b042-2ce9ee57a9bc" (UID: "10d410ea-6d34-4675-b042-2ce9ee57a9bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.872720 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls" (OuterVolumeSpecName: "kube-api-access-sj5ls") pod "10d410ea-6d34-4675-b042-2ce9ee57a9bc" (UID: "10d410ea-6d34-4675-b042-2ce9ee57a9bc"). InnerVolumeSpecName "kube-api-access-sj5ls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.960503 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj5ls\" (UniqueName: \"kubernetes.io/projected/10d410ea-6d34-4675-b042-2ce9ee57a9bc-kube-api-access-sj5ls\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:12 crc kubenswrapper[4768]: I0223 19:13:12.960570 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.221132 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10d410ea-6d34-4675-b042-2ce9ee57a9bc" (UID: "10d410ea-6d34-4675-b042-2ce9ee57a9bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.268539 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10d410ea-6d34-4675-b042-2ce9ee57a9bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.309716 4768 generic.go:334] "Generic (PLEG): container finished" podID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerID="f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0" exitCode=0 Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.309800 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7bxrl" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.327505 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerDied","Data":"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0"} Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.327545 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7bxrl" event={"ID":"10d410ea-6d34-4675-b042-2ce9ee57a9bc","Type":"ContainerDied","Data":"d99701d72d2bcef572ea355837593e642e7b1cd11df6e64442038d0272aac83a"} Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.327567 4768 scope.go:117] "RemoveContainer" containerID="f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.372797 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.374549 4768 scope.go:117] "RemoveContainer" containerID="989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.390088 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7bxrl"] Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.405292 4768 scope.go:117] "RemoveContainer" containerID="bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.456745 4768 scope.go:117] "RemoveContainer" containerID="f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0" Feb 23 19:13:13 crc kubenswrapper[4768]: E0223 19:13:13.457234 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0\": container with ID starting with f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0 not found: ID does not exist" containerID="f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.457304 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0"} err="failed to get container status \"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0\": rpc error: code = NotFound desc = could not find container \"f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0\": container with ID starting with f179861d0e0a4e6699cafb163ebb2c2148c20d9dac03b4a84f67b67f1b05eff0 not found: ID does not exist" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.457337 4768 scope.go:117] "RemoveContainer" containerID="989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e" Feb 23 19:13:13 crc kubenswrapper[4768]: E0223 19:13:13.457669 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e\": container with ID starting with 989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e not found: ID does not exist" containerID="989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.457705 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e"} err="failed to get container status \"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e\": rpc error: code = NotFound desc = could not find container \"989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e\": container with ID 
starting with 989e3c884b1c9cec16a71ca07b38f7fc875e7363e87fe0b9f2aca4fe081f296e not found: ID does not exist" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.457728 4768 scope.go:117] "RemoveContainer" containerID="bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d" Feb 23 19:13:13 crc kubenswrapper[4768]: E0223 19:13:13.457989 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d\": container with ID starting with bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d not found: ID does not exist" containerID="bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d" Feb 23 19:13:13 crc kubenswrapper[4768]: I0223 19:13:13.458026 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d"} err="failed to get container status \"bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d\": rpc error: code = NotFound desc = could not find container \"bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d\": container with ID starting with bc74babfe8fd2ecc6765654b0d92c23f85bb119ccff2383470110932d72ffc6d not found: ID does not exist" Feb 23 19:13:15 crc kubenswrapper[4768]: I0223 19:13:15.327756 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" path="/var/lib/kubelet/pods/10d410ea-6d34-4675-b042-2ce9ee57a9bc/volumes" Feb 23 19:13:19 crc kubenswrapper[4768]: I0223 19:13:19.307773 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:13:19 crc kubenswrapper[4768]: E0223 19:13:19.308661 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:13:34 crc kubenswrapper[4768]: I0223 19:13:34.307936 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:13:34 crc kubenswrapper[4768]: E0223 19:13:34.308738 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:13:46 crc kubenswrapper[4768]: I0223 19:13:46.308158 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:13:46 crc kubenswrapper[4768]: E0223 19:13:46.308931 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:13:49 crc kubenswrapper[4768]: I0223 19:13:49.687506 4768 generic.go:334] "Generic (PLEG): container finished" podID="4a3528f8-0776-47bf-81fa-c7bd1698938b" containerID="da1a2f7ec09c10e4e9a45faba397257d86cd931e9a610c99443135a220fc0a5d" exitCode=0 Feb 23 19:13:49 crc kubenswrapper[4768]: I0223 19:13:49.687864 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" event={"ID":"4a3528f8-0776-47bf-81fa-c7bd1698938b","Type":"ContainerDied","Data":"da1a2f7ec09c10e4e9a45faba397257d86cd931e9a610c99443135a220fc0a5d"} Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.172668 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276048 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276594 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276767 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276841 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2q6n5\" (UniqueName: \"kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276877 4768 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.276956 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.277006 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.277103 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.277178 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.277285 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.277448 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0\") pod \"4a3528f8-0776-47bf-81fa-c7bd1698938b\" (UID: \"4a3528f8-0776-47bf-81fa-c7bd1698938b\") " Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.294646 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5" (OuterVolumeSpecName: "kube-api-access-2q6n5") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "kube-api-access-2q6n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.298076 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.314121 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.319377 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.322502 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.327331 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.328215 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.328436 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.333155 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory" (OuterVolumeSpecName: "inventory") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.335489 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.335578 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a3528f8-0776-47bf-81fa-c7bd1698938b" (UID: "4a3528f8-0776-47bf-81fa-c7bd1698938b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383198 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383261 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383276 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383288 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2q6n5\" (UniqueName: \"kubernetes.io/projected/4a3528f8-0776-47bf-81fa-c7bd1698938b-kube-api-access-2q6n5\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383299 4768 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383311 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383476 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-2\") 
on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383613 4768 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383634 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383643 4768 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.383652 4768 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4a3528f8-0776-47bf-81fa-c7bd1698938b-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.713205 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf" event={"ID":"4a3528f8-0776-47bf-81fa-c7bd1698938b","Type":"ContainerDied","Data":"e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6"} Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.713270 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e48fd12d9624a617c29f2086411df63aa988dc0ee58037a69391932e49e4dff6" Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.713319 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-f79cf"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.951825 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"]
Feb 23 19:13:51 crc kubenswrapper[4768]: E0223 19:13:51.952503 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="registry-server"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.952535 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="registry-server"
Feb 23 19:13:51 crc kubenswrapper[4768]: E0223 19:13:51.952578 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="extract-content"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.952592 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="extract-content"
Feb 23 19:13:51 crc kubenswrapper[4768]: E0223 19:13:51.952627 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="extract-utilities"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.952641 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="extract-utilities"
Feb 23 19:13:51 crc kubenswrapper[4768]: E0223 19:13:51.952669 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3528f8-0776-47bf-81fa-c7bd1698938b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.952685 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3528f8-0776-47bf-81fa-c7bd1698938b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.953114 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d410ea-6d34-4675-b042-2ce9ee57a9bc" containerName="registry-server"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.953159 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3528f8-0776-47bf-81fa-c7bd1698938b" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.954214 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.958834 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.958986 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.959278 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.962572 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.963358 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-7fkmg"
Feb 23 19:13:51 crc kubenswrapper[4768]: I0223 19:13:51.971302 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"]
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011565 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011641 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kq9c\" (UniqueName: \"kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011677 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011751 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011791 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.011834 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.012069 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.113835 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.113935 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.114001 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.114127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.114172 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.114234 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kq9c\" (UniqueName: \"kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.114295 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.117849 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.118133 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.118554 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.119268 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.123964 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.124140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.133539 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kq9c\" (UniqueName: \"kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.329998 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"
Feb 23 19:13:52 crc kubenswrapper[4768]: I0223 19:13:52.917401 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x"]
Feb 23 19:13:52 crc kubenswrapper[4768]: W0223 19:13:52.923101 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2393d837_c9f2_4896_ab3e_32924e48359a.slice/crio-7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256 WatchSource:0}: Error finding container 7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256: Status 404 returned error can't find the container with id 7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256
Feb 23 19:13:53 crc kubenswrapper[4768]: I0223 19:13:53.738912 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" event={"ID":"2393d837-c9f2-4896-ab3e-32924e48359a","Type":"ContainerStarted","Data":"7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256"}
Feb 23 19:13:54 crc kubenswrapper[4768]: I0223 19:13:54.755346 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" event={"ID":"2393d837-c9f2-4896-ab3e-32924e48359a","Type":"ContainerStarted","Data":"83ad50a10fae77b8930615d3fb0890c5c22d6fab925bf34e7c351e69a3a46710"}
Feb 23 19:13:54 crc kubenswrapper[4768]: I0223 19:13:54.800634 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" podStartSLOduration=3.084178191 podStartE2EDuration="3.80060776s" podCreationTimestamp="2026-02-23 19:13:51 +0000 UTC" firstStartedPulling="2026-02-23 19:13:52.925914854 +0000 UTC m=+2428.316400654" lastFinishedPulling="2026-02-23 19:13:53.642344423 +0000 UTC m=+2429.032830223" observedRunningTime="2026-02-23 19:13:54.779974428 +0000 UTC m=+2430.170460218" watchObservedRunningTime="2026-02-23 19:13:54.80060776 +0000 UTC m=+2430.191093570"
Feb 23 19:13:57 crc kubenswrapper[4768]: I0223 19:13:57.308152 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:13:57 crc kubenswrapper[4768]: E0223 19:13:57.309587 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:14:10 crc kubenswrapper[4768]: I0223 19:14:10.308454 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:14:10 crc kubenswrapper[4768]: E0223 19:14:10.309327 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:14:21 crc kubenswrapper[4768]: I0223 19:14:21.308334 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:14:21 crc kubenswrapper[4768]: E0223 19:14:21.309190 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:14:36 crc kubenswrapper[4768]: I0223 19:14:36.307791 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:14:36 crc kubenswrapper[4768]: E0223 19:14:36.308695 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:14:49 crc kubenswrapper[4768]: I0223 19:14:49.308215 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:14:49 crc kubenswrapper[4768]: E0223 19:14:49.309338 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.165438 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"]
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.168135 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.170937 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.170982 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.186660 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"]
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.212643 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.212830 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttft\" (UniqueName: \"kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.212912 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.318127 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttft\" (UniqueName: \"kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.318279 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.318350 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.322184 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.331941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.352941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttft\" (UniqueName: \"kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft\") pod \"collect-profiles-29531235-dljfm\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.492184 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:00 crc kubenswrapper[4768]: I0223 19:15:00.974216 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"]
Feb 23 19:15:01 crc kubenswrapper[4768]: I0223 19:15:01.518912 4768 generic.go:334] "Generic (PLEG): container finished" podID="a462ded4-a757-4d27-a2e2-58ae957ff3b6" containerID="c266da4fa8b3d41d23c39d1d912f352910e92cd7fa30b8918e38a46e30e6a4d4" exitCode=0
Feb 23 19:15:01 crc kubenswrapper[4768]: I0223 19:15:01.518997 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm" event={"ID":"a462ded4-a757-4d27-a2e2-58ae957ff3b6","Type":"ContainerDied","Data":"c266da4fa8b3d41d23c39d1d912f352910e92cd7fa30b8918e38a46e30e6a4d4"}
Feb 23 19:15:01 crc kubenswrapper[4768]: I0223 19:15:01.519280 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm" event={"ID":"a462ded4-a757-4d27-a2e2-58ae957ff3b6","Type":"ContainerStarted","Data":"def28288d98e84927da0b8f6a7b0f7209afc8d3018065ef8a8b1b9a08c5beb42"}
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.845736 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.974988 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttft\" (UniqueName: \"kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft\") pod \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") "
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.975333 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume\") pod \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") "
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.975434 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume\") pod \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\" (UID: \"a462ded4-a757-4d27-a2e2-58ae957ff3b6\") "
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.976340 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume" (OuterVolumeSpecName: "config-volume") pod "a462ded4-a757-4d27-a2e2-58ae957ff3b6" (UID: "a462ded4-a757-4d27-a2e2-58ae957ff3b6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.982513 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft" (OuterVolumeSpecName: "kube-api-access-8ttft") pod "a462ded4-a757-4d27-a2e2-58ae957ff3b6" (UID: "a462ded4-a757-4d27-a2e2-58ae957ff3b6"). InnerVolumeSpecName "kube-api-access-8ttft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:15:02 crc kubenswrapper[4768]: I0223 19:15:02.983178 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a462ded4-a757-4d27-a2e2-58ae957ff3b6" (UID: "a462ded4-a757-4d27-a2e2-58ae957ff3b6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.078854 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttft\" (UniqueName: \"kubernetes.io/projected/a462ded4-a757-4d27-a2e2-58ae957ff3b6-kube-api-access-8ttft\") on node \"crc\" DevicePath \"\""
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.078909 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a462ded4-a757-4d27-a2e2-58ae957ff3b6-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.078929 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a462ded4-a757-4d27-a2e2-58ae957ff3b6-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.544036 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm" event={"ID":"a462ded4-a757-4d27-a2e2-58ae957ff3b6","Type":"ContainerDied","Data":"def28288d98e84927da0b8f6a7b0f7209afc8d3018065ef8a8b1b9a08c5beb42"}
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.544099 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="def28288d98e84927da0b8f6a7b0f7209afc8d3018065ef8a8b1b9a08c5beb42"
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.544107 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531235-dljfm"
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.970821 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"]
Feb 23 19:15:03 crc kubenswrapper[4768]: I0223 19:15:03.981137 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531190-g7gns"]
Feb 23 19:15:04 crc kubenswrapper[4768]: I0223 19:15:04.308345 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:15:04 crc kubenswrapper[4768]: E0223 19:15:04.308579 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:15:05 crc kubenswrapper[4768]: I0223 19:15:05.341148 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbed104c-291d-45f5-b41d-99814829422e" path="/var/lib/kubelet/pods/dbed104c-291d-45f5-b41d-99814829422e/volumes"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.628699 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"]
Feb 23 19:15:12 crc kubenswrapper[4768]: E0223 19:15:12.630452 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a462ded4-a757-4d27-a2e2-58ae957ff3b6" containerName="collect-profiles"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.630471 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="a462ded4-a757-4d27-a2e2-58ae957ff3b6" containerName="collect-profiles"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.630744 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="a462ded4-a757-4d27-a2e2-58ae957ff3b6" containerName="collect-profiles"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.633642 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.651833 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"]
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.803025 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.803174 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws2n5\" (UniqueName: \"kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.803377 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.905697 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.906030 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws2n5\" (UniqueName: \"kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.906074 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.906562 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.906771 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.932750 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws2n5\" (UniqueName: \"kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5\") pod \"certified-operators-kvfg6\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:12 crc kubenswrapper[4768]: I0223 19:15:12.973359 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:13 crc kubenswrapper[4768]: I0223 19:15:13.441657 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"]
Feb 23 19:15:13 crc kubenswrapper[4768]: I0223 19:15:13.645662 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerStarted","Data":"5065e3ab1ef5a038343753c38e09dc899fddc559c7d916e5a6386da4959a609f"}
Feb 23 19:15:14 crc kubenswrapper[4768]: I0223 19:15:14.662736 4768 generic.go:334] "Generic (PLEG): container finished" podID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerID="c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce" exitCode=0
Feb 23 19:15:14 crc kubenswrapper[4768]: I0223 19:15:14.662787 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerDied","Data":"c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce"}
Feb 23 19:15:14 crc kubenswrapper[4768]: I0223 19:15:14.666570 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 19:15:15 crc kubenswrapper[4768]: I0223 19:15:15.674717 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerStarted","Data":"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903"}
Feb 23 19:15:16 crc kubenswrapper[4768]: I0223 19:15:16.688206 4768 generic.go:334] "Generic (PLEG): container finished" podID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerID="f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903" exitCode=0
Feb 23 19:15:16 crc kubenswrapper[4768]: I0223 19:15:16.688279 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerDied","Data":"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903"}
Feb 23 19:15:17 crc kubenswrapper[4768]: I0223 19:15:17.744983 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerStarted","Data":"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e"}
Feb 23 19:15:17 crc kubenswrapper[4768]: I0223 19:15:17.772547 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kvfg6" podStartSLOduration=3.356320525 podStartE2EDuration="5.772522319s" podCreationTimestamp="2026-02-23 19:15:12 +0000 UTC" firstStartedPulling="2026-02-23 19:15:14.66577034 +0000 UTC m=+2510.056256180" lastFinishedPulling="2026-02-23 19:15:17.081972174 +0000 UTC m=+2512.472457974" observedRunningTime="2026-02-23 19:15:17.768371585 +0000 UTC m=+2513.158857425" watchObservedRunningTime="2026-02-23 19:15:17.772522319 +0000 UTC m=+2513.163008119"
Feb 23 19:15:19 crc kubenswrapper[4768]: I0223 19:15:19.307361 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0"
Feb 23 19:15:19 crc kubenswrapper[4768]: E0223 19:15:19.309131 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:15:22 crc kubenswrapper[4768]: I0223 19:15:22.974137 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:22 crc kubenswrapper[4768]: I0223 19:15:22.974521 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:23 crc kubenswrapper[4768]: I0223 19:15:23.024724 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:23 crc kubenswrapper[4768]: I0223 19:15:23.857082 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kvfg6"
Feb 23 19:15:23 crc kubenswrapper[4768]: I0223 19:15:23.925852 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"]
Feb 23 19:15:25 crc kubenswrapper[4768]: I0223 19:15:25.831543 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kvfg6" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="registry-server" containerID="cri-o://f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e" gracePeriod=2
Feb 23 19:15:26 crc kubenswrapper[4768]:
I0223 19:15:26.291476 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kvfg6" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.440691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws2n5\" (UniqueName: \"kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5\") pod \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.441002 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content\") pod \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.441049 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities\") pod \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\" (UID: \"02f8c996-8a3a-44c7-8d75-55c3ae31ed91\") " Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.442017 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities" (OuterVolumeSpecName: "utilities") pod "02f8c996-8a3a-44c7-8d75-55c3ae31ed91" (UID: "02f8c996-8a3a-44c7-8d75-55c3ae31ed91"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.447818 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5" (OuterVolumeSpecName: "kube-api-access-ws2n5") pod "02f8c996-8a3a-44c7-8d75-55c3ae31ed91" (UID: "02f8c996-8a3a-44c7-8d75-55c3ae31ed91"). InnerVolumeSpecName "kube-api-access-ws2n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.543411 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.543446 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws2n5\" (UniqueName: \"kubernetes.io/projected/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-kube-api-access-ws2n5\") on node \"crc\" DevicePath \"\"" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.662795 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f8c996-8a3a-44c7-8d75-55c3ae31ed91" (UID: "02f8c996-8a3a-44c7-8d75-55c3ae31ed91"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.748623 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f8c996-8a3a-44c7-8d75-55c3ae31ed91-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.841682 4768 generic.go:334] "Generic (PLEG): container finished" podID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerID="f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e" exitCode=0 Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.841720 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerDied","Data":"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e"} Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.841772 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kvfg6" event={"ID":"02f8c996-8a3a-44c7-8d75-55c3ae31ed91","Type":"ContainerDied","Data":"5065e3ab1ef5a038343753c38e09dc899fddc559c7d916e5a6386da4959a609f"} Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.841795 4768 scope.go:117] "RemoveContainer" containerID="f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.841839 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kvfg6" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.867307 4768 scope.go:117] "RemoveContainer" containerID="f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.890548 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"] Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.900149 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kvfg6"] Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.904072 4768 scope.go:117] "RemoveContainer" containerID="c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.945774 4768 scope.go:117] "RemoveContainer" containerID="f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e" Feb 23 19:15:26 crc kubenswrapper[4768]: E0223 19:15:26.946339 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e\": container with ID starting with f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e not found: ID does not exist" containerID="f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.946371 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e"} err="failed to get container status \"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e\": rpc error: code = NotFound desc = could not find container \"f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e\": container with ID starting with f1e4628ae34d67f2f58894dfe079c5764189e33c9d336d3e189174cbaa6f451e not 
found: ID does not exist" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.946391 4768 scope.go:117] "RemoveContainer" containerID="f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903" Feb 23 19:15:26 crc kubenswrapper[4768]: E0223 19:15:26.946604 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903\": container with ID starting with f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903 not found: ID does not exist" containerID="f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.946623 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903"} err="failed to get container status \"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903\": rpc error: code = NotFound desc = could not find container \"f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903\": container with ID starting with f3413fb784fa4386859d936b0fed0378486ff4d4e02ae9bba8eda2ff2a131903 not found: ID does not exist" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.946637 4768 scope.go:117] "RemoveContainer" containerID="c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce" Feb 23 19:15:26 crc kubenswrapper[4768]: E0223 19:15:26.946985 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce\": container with ID starting with c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce not found: ID does not exist" containerID="c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce" Feb 23 19:15:26 crc kubenswrapper[4768]: I0223 19:15:26.947010 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce"} err="failed to get container status \"c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce\": rpc error: code = NotFound desc = could not find container \"c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce\": container with ID starting with c1e26f3e4a5b68ba75ee63a5c1d4c9650b969dbb966fb3e6d595caefd20fb8ce not found: ID does not exist" Feb 23 19:15:27 crc kubenswrapper[4768]: I0223 19:15:27.322433 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" path="/var/lib/kubelet/pods/02f8c996-8a3a-44c7-8d75-55c3ae31ed91/volumes" Feb 23 19:15:32 crc kubenswrapper[4768]: I0223 19:15:32.308065 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:15:32 crc kubenswrapper[4768]: E0223 19:15:32.308889 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:15:41 crc kubenswrapper[4768]: I0223 19:15:41.305047 4768 scope.go:117] "RemoveContainer" containerID="572031507feda3505a8da02af6e84219f377e32eecd91fba14e7ba6e9946f2ef" Feb 23 19:15:47 crc kubenswrapper[4768]: I0223 19:15:47.309446 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:15:47 crc kubenswrapper[4768]: E0223 19:15:47.310837 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:16:00 crc kubenswrapper[4768]: I0223 19:16:00.308156 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:16:00 crc kubenswrapper[4768]: E0223 19:16:00.308952 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:16:09 crc kubenswrapper[4768]: I0223 19:16:09.355205 4768 generic.go:334] "Generic (PLEG): container finished" podID="2393d837-c9f2-4896-ab3e-32924e48359a" containerID="83ad50a10fae77b8930615d3fb0890c5c22d6fab925bf34e7c351e69a3a46710" exitCode=0 Feb 23 19:16:09 crc kubenswrapper[4768]: I0223 19:16:09.355281 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" event={"ID":"2393d837-c9f2-4896-ab3e-32924e48359a","Type":"ContainerDied","Data":"83ad50a10fae77b8930615d3fb0890c5c22d6fab925bf34e7c351e69a3a46710"} Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.779560 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.849773 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kq9c\" (UniqueName: \"kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.849897 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.849932 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.849973 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.850037 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: 
I0223 19:16:10.850118 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.850152 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam\") pod \"2393d837-c9f2-4896-ab3e-32924e48359a\" (UID: \"2393d837-c9f2-4896-ab3e-32924e48359a\") " Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.855922 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c" (OuterVolumeSpecName: "kube-api-access-9kq9c") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "kube-api-access-9kq9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.856422 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.879356 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.881643 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.881896 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.883691 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory" (OuterVolumeSpecName: "inventory") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.884264 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2393d837-c9f2-4896-ab3e-32924e48359a" (UID: "2393d837-c9f2-4896-ab3e-32924e48359a"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953004 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953069 4768 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-inventory\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953092 4768 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953113 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953131 4768 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 
23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953151 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2393d837-c9f2-4896-ab3e-32924e48359a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:10 crc kubenswrapper[4768]: I0223 19:16:10.953169 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kq9c\" (UniqueName: \"kubernetes.io/projected/2393d837-c9f2-4896-ab3e-32924e48359a-kube-api-access-9kq9c\") on node \"crc\" DevicePath \"\"" Feb 23 19:16:11 crc kubenswrapper[4768]: I0223 19:16:11.375608 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" event={"ID":"2393d837-c9f2-4896-ab3e-32924e48359a","Type":"ContainerDied","Data":"7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256"} Feb 23 19:16:11 crc kubenswrapper[4768]: I0223 19:16:11.376002 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d61d4b88c0dbcb073e25e031d6d18d0f9ff1531ab5392fdaa52cc38c87fd256" Feb 23 19:16:11 crc kubenswrapper[4768]: I0223 19:16:11.375700 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x" Feb 23 19:16:13 crc kubenswrapper[4768]: I0223 19:16:13.313384 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:16:14 crc kubenswrapper[4768]: I0223 19:16:14.415756 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd"} Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.824898 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 19:17:04 crc kubenswrapper[4768]: E0223 19:17:04.826399 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="extract-utilities" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826426 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="extract-utilities" Feb 23 19:17:04 crc kubenswrapper[4768]: E0223 19:17:04.826455 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2393d837-c9f2-4896-ab3e-32924e48359a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826470 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2393d837-c9f2-4896-ab3e-32924e48359a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 19:17:04 crc kubenswrapper[4768]: E0223 19:17:04.826516 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="registry-server" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826531 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" 
containerName="registry-server" Feb 23 19:17:04 crc kubenswrapper[4768]: E0223 19:17:04.826559 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="extract-content" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826572 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="extract-content" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826959 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f8c996-8a3a-44c7-8d75-55c3ae31ed91" containerName="registry-server" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.826993 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2393d837-c9f2-4896-ab3e-32924e48359a" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.841481 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.841707 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.851945 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.852394 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.852552 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7lcml" Feb 23 19:17:04 crc kubenswrapper[4768]: I0223 19:17:04.852706 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.004662 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.004758 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.004787 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.004916 4768 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.004967 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.005033 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.005061 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.005269 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t528h\" (UniqueName: \"kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 
19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.005326 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.107119 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.107180 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.107275 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t528h\" (UniqueName: \"kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.107316 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.107405 4768 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.108891 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.108931 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.109054 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.109138 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.109140 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.109505 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.109989 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.110105 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.110522 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.115524 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.115953 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.119497 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.131471 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t528h\" (UniqueName: \"kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.175792 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " pod="openstack/tempest-tests-tempest" Feb 23 19:17:05 crc kubenswrapper[4768]: I0223 19:17:05.476567 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 19:17:06 crc kubenswrapper[4768]: I0223 19:17:06.008406 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 23 19:17:07 crc kubenswrapper[4768]: I0223 19:17:07.003465 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"89c93f99-08a8-4231-8b96-d307d0525745","Type":"ContainerStarted","Data":"453e03a5147dbf933442eb3018522cd704da3f857dfcf38073398f46f81411ea"} Feb 23 19:17:39 crc kubenswrapper[4768]: E0223 19:17:39.334499 4768 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 23 19:17:39 crc kubenswrapper[4768]: E0223 19:17:39.335497 4768 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathE
xpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t528h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(89c93f99-08a8-4231-8b96-d307d0525745): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 23 19:17:39 crc kubenswrapper[4768]: E0223 19:17:39.336698 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="89c93f99-08a8-4231-8b96-d307d0525745" Feb 23 19:17:40 crc kubenswrapper[4768]: E0223 19:17:40.357984 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="89c93f99-08a8-4231-8b96-d307d0525745" Feb 23 19:17:52 crc kubenswrapper[4768]: I0223 19:17:52.718949 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 23 19:17:54 crc kubenswrapper[4768]: I0223 19:17:54.517191 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"89c93f99-08a8-4231-8b96-d307d0525745","Type":"ContainerStarted","Data":"98ea015b77ebe053b1cf8d928aee83aceab81038f809c6731a82cdd160ea8388"} Feb 23 19:17:54 crc kubenswrapper[4768]: I0223 19:17:54.562076 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.850268478 podStartE2EDuration="51.562047143s" podCreationTimestamp="2026-02-23 19:17:03 +0000 UTC" firstStartedPulling="2026-02-23 19:17:06.003558655 +0000 UTC m=+2621.394044455" lastFinishedPulling="2026-02-23 19:17:52.71533728 +0000 UTC m=+2668.105823120" observedRunningTime="2026-02-23 19:17:54.546986544 +0000 UTC m=+2669.937472404" 
watchObservedRunningTime="2026-02-23 19:17:54.562047143 +0000 UTC m=+2669.952532983" Feb 23 19:18:39 crc kubenswrapper[4768]: I0223 19:18:39.545897 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:18:39 crc kubenswrapper[4768]: I0223 19:18:39.546770 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:19:09 crc kubenswrapper[4768]: I0223 19:19:09.544900 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:19:09 crc kubenswrapper[4768]: I0223 19:19:09.545833 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:19:39 crc kubenswrapper[4768]: I0223 19:19:39.545622 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:19:39 crc kubenswrapper[4768]: I0223 19:19:39.546896 
4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:19:39 crc kubenswrapper[4768]: I0223 19:19:39.546988 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:19:39 crc kubenswrapper[4768]: I0223 19:19:39.548453 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:19:39 crc kubenswrapper[4768]: I0223 19:19:39.548615 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd" gracePeriod=600 Feb 23 19:19:40 crc kubenswrapper[4768]: I0223 19:19:40.704924 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd" exitCode=0 Feb 23 19:19:40 crc kubenswrapper[4768]: I0223 19:19:40.705046 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd"} Feb 23 19:19:40 crc kubenswrapper[4768]: 
I0223 19:19:40.705585 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491"} Feb 23 19:19:40 crc kubenswrapper[4768]: I0223 19:19:40.705616 4768 scope.go:117] "RemoveContainer" containerID="5985597f7b823fc057524742d49e6866992a5381e30385a3fd2391b1319a65f0" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.446308 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.453745 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.479260 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.591396 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.591506 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.591950 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27rc\" 
(UniqueName: \"kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.695236 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.695387 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.695491 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27rc\" (UniqueName: \"kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.695869 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.696192 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.716286 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27rc\" (UniqueName: \"kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc\") pod \"redhat-operators-vtkmn\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:23 crc kubenswrapper[4768]: I0223 19:21:23.827007 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:24 crc kubenswrapper[4768]: I0223 19:21:24.352029 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:24 crc kubenswrapper[4768]: I0223 19:21:24.881322 4768 generic.go:334] "Generic (PLEG): container finished" podID="96372ad0-b596-420b-a8bc-b4258526593b" containerID="07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781" exitCode=0 Feb 23 19:21:24 crc kubenswrapper[4768]: I0223 19:21:24.881431 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerDied","Data":"07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781"} Feb 23 19:21:24 crc kubenswrapper[4768]: I0223 19:21:24.881649 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerStarted","Data":"dcfe8737ff0d23315f4033f856f6dc26c1fadc4a3d74383e9641d34caa48ae6a"} Feb 23 19:21:24 crc kubenswrapper[4768]: I0223 19:21:24.884489 4768 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 23 19:21:25 crc kubenswrapper[4768]: I0223 19:21:25.894016 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerStarted","Data":"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63"} Feb 23 19:21:26 crc kubenswrapper[4768]: I0223 19:21:26.908712 4768 generic.go:334] "Generic (PLEG): container finished" podID="96372ad0-b596-420b-a8bc-b4258526593b" containerID="b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63" exitCode=0 Feb 23 19:21:26 crc kubenswrapper[4768]: I0223 19:21:26.908841 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerDied","Data":"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63"} Feb 23 19:21:27 crc kubenswrapper[4768]: I0223 19:21:27.934582 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerStarted","Data":"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8"} Feb 23 19:21:27 crc kubenswrapper[4768]: I0223 19:21:27.966272 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vtkmn" podStartSLOduration=2.294072346 podStartE2EDuration="4.966222863s" podCreationTimestamp="2026-02-23 19:21:23 +0000 UTC" firstStartedPulling="2026-02-23 19:21:24.883088327 +0000 UTC m=+2880.273574147" lastFinishedPulling="2026-02-23 19:21:27.555238864 +0000 UTC m=+2882.945724664" observedRunningTime="2026-02-23 19:21:27.956645103 +0000 UTC m=+2883.347130913" watchObservedRunningTime="2026-02-23 19:21:27.966222863 +0000 UTC m=+2883.356708663" Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.843929 4768 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.860042 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.895074 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.945443 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.945505 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:30 crc kubenswrapper[4768]: I0223 19:21:30.945535 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kjh2\" (UniqueName: \"kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.047389 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content\") pod \"redhat-marketplace-6dgf2\" (UID: 
\"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.047733 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kjh2\" (UniqueName: \"kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.047980 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.048043 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.048508 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.074420 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kjh2\" (UniqueName: \"kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2\") pod \"redhat-marketplace-6dgf2\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " 
pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.204756 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.661036 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:31 crc kubenswrapper[4768]: W0223 19:21:31.669569 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b682bf3_2092_48e5_acb9_8b2c1eef743a.slice/crio-7f3216e27000439e44e83209a5e8b5cf79edd64c08344a761691e54b75797747 WatchSource:0}: Error finding container 7f3216e27000439e44e83209a5e8b5cf79edd64c08344a761691e54b75797747: Status 404 returned error can't find the container with id 7f3216e27000439e44e83209a5e8b5cf79edd64c08344a761691e54b75797747 Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.972883 4768 generic.go:334] "Generic (PLEG): container finished" podID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerID="3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a" exitCode=0 Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.974260 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerDied","Data":"3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a"} Feb 23 19:21:31 crc kubenswrapper[4768]: I0223 19:21:31.974328 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerStarted","Data":"7f3216e27000439e44e83209a5e8b5cf79edd64c08344a761691e54b75797747"} Feb 23 19:21:33 crc kubenswrapper[4768]: I0223 19:21:33.002864 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerDied","Data":"7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28"} Feb 23 19:21:33 crc kubenswrapper[4768]: I0223 19:21:33.002744 4768 generic.go:334] "Generic (PLEG): container finished" podID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerID="7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28" exitCode=0 Feb 23 19:21:33 crc kubenswrapper[4768]: I0223 19:21:33.828441 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:33 crc kubenswrapper[4768]: I0223 19:21:33.828832 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:34 crc kubenswrapper[4768]: I0223 19:21:34.017059 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerStarted","Data":"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2"} Feb 23 19:21:34 crc kubenswrapper[4768]: I0223 19:21:34.043312 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dgf2" podStartSLOduration=2.5481324389999997 podStartE2EDuration="4.043264568s" podCreationTimestamp="2026-02-23 19:21:30 +0000 UTC" firstStartedPulling="2026-02-23 19:21:31.975607515 +0000 UTC m=+2887.366093315" lastFinishedPulling="2026-02-23 19:21:33.470739644 +0000 UTC m=+2888.861225444" observedRunningTime="2026-02-23 19:21:34.03450208 +0000 UTC m=+2889.424987870" watchObservedRunningTime="2026-02-23 19:21:34.043264568 +0000 UTC m=+2889.433750368" Feb 23 19:21:34 crc kubenswrapper[4768]: I0223 19:21:34.897172 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vtkmn" 
podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="registry-server" probeResult="failure" output=< Feb 23 19:21:34 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 19:21:34 crc kubenswrapper[4768]: > Feb 23 19:21:39 crc kubenswrapper[4768]: I0223 19:21:39.544981 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:21:39 crc kubenswrapper[4768]: I0223 19:21:39.546362 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:21:41 crc kubenswrapper[4768]: I0223 19:21:41.205774 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:41 crc kubenswrapper[4768]: I0223 19:21:41.206212 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:41 crc kubenswrapper[4768]: I0223 19:21:41.265779 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:42 crc kubenswrapper[4768]: I0223 19:21:42.165636 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:42 crc kubenswrapper[4768]: I0223 19:21:42.245171 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:43 crc kubenswrapper[4768]: I0223 19:21:43.879830 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:43 crc kubenswrapper[4768]: I0223 19:21:43.963623 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.118500 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dgf2" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="registry-server" containerID="cri-o://061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2" gracePeriod=2 Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.640100 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.766425 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kjh2\" (UniqueName: \"kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2\") pod \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.766610 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content\") pod \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.766653 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities\") pod \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\" (UID: \"4b682bf3-2092-48e5-acb9-8b2c1eef743a\") " Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 
19:21:44.767432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities" (OuterVolumeSpecName: "utilities") pod "4b682bf3-2092-48e5-acb9-8b2c1eef743a" (UID: "4b682bf3-2092-48e5-acb9-8b2c1eef743a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.777154 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2" (OuterVolumeSpecName: "kube-api-access-7kjh2") pod "4b682bf3-2092-48e5-acb9-8b2c1eef743a" (UID: "4b682bf3-2092-48e5-acb9-8b2c1eef743a"). InnerVolumeSpecName "kube-api-access-7kjh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.790776 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b682bf3-2092-48e5-acb9-8b2c1eef743a" (UID: "4b682bf3-2092-48e5-acb9-8b2c1eef743a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.869523 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.869577 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b682bf3-2092-48e5-acb9-8b2c1eef743a-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:44 crc kubenswrapper[4768]: I0223 19:21:44.869592 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kjh2\" (UniqueName: \"kubernetes.io/projected/4b682bf3-2092-48e5-acb9-8b2c1eef743a-kube-api-access-7kjh2\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.105965 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130057 4768 generic.go:334] "Generic (PLEG): container finished" podID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerID="061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2" exitCode=0 Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130135 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dgf2" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130134 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerDied","Data":"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2"} Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130312 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dgf2" event={"ID":"4b682bf3-2092-48e5-acb9-8b2c1eef743a","Type":"ContainerDied","Data":"7f3216e27000439e44e83209a5e8b5cf79edd64c08344a761691e54b75797747"} Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130342 4768 scope.go:117] "RemoveContainer" containerID="061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.130739 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vtkmn" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="registry-server" containerID="cri-o://f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8" gracePeriod=2 Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.164651 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.174443 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dgf2"] Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.174613 4768 scope.go:117] "RemoveContainer" containerID="7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.192112 4768 scope.go:117] "RemoveContainer" containerID="3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a" Feb 23 19:21:45 crc 
kubenswrapper[4768]: I0223 19:21:45.325774 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" path="/var/lib/kubelet/pods/4b682bf3-2092-48e5-acb9-8b2c1eef743a/volumes" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.355387 4768 scope.go:117] "RemoveContainer" containerID="061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2" Feb 23 19:21:45 crc kubenswrapper[4768]: E0223 19:21:45.356156 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2\": container with ID starting with 061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2 not found: ID does not exist" containerID="061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.356228 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2"} err="failed to get container status \"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2\": rpc error: code = NotFound desc = could not find container \"061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2\": container with ID starting with 061d0d9d0e27f5aece01aa75042a3c2ca09c8dfba49515bf65bc74c1a65277b2 not found: ID does not exist" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.356320 4768 scope.go:117] "RemoveContainer" containerID="7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28" Feb 23 19:21:45 crc kubenswrapper[4768]: E0223 19:21:45.356871 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28\": container with ID starting with 
7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28 not found: ID does not exist" containerID="7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.356921 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28"} err="failed to get container status \"7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28\": rpc error: code = NotFound desc = could not find container \"7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28\": container with ID starting with 7f8fc1d90cc7ae64cd4442a7d93fe774afadc460cd588cc6cc87795185e29a28 not found: ID does not exist" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.356944 4768 scope.go:117] "RemoveContainer" containerID="3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a" Feb 23 19:21:45 crc kubenswrapper[4768]: E0223 19:21:45.357377 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a\": container with ID starting with 3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a not found: ID does not exist" containerID="3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.357440 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a"} err="failed to get container status \"3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a\": rpc error: code = NotFound desc = could not find container \"3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a\": container with ID starting with 3295aea205372db0866a36ec7c28597f4792325f123275f124b2426e874f392a not found: ID does not 
exist" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.607426 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.689269 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content\") pod \"96372ad0-b596-420b-a8bc-b4258526593b\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.689762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities\") pod \"96372ad0-b596-420b-a8bc-b4258526593b\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.690088 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27rc\" (UniqueName: \"kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc\") pod \"96372ad0-b596-420b-a8bc-b4258526593b\" (UID: \"96372ad0-b596-420b-a8bc-b4258526593b\") " Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.692408 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities" (OuterVolumeSpecName: "utilities") pod "96372ad0-b596-420b-a8bc-b4258526593b" (UID: "96372ad0-b596-420b-a8bc-b4258526593b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.701516 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc" (OuterVolumeSpecName: "kube-api-access-b27rc") pod "96372ad0-b596-420b-a8bc-b4258526593b" (UID: "96372ad0-b596-420b-a8bc-b4258526593b"). InnerVolumeSpecName "kube-api-access-b27rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.794747 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b27rc\" (UniqueName: \"kubernetes.io/projected/96372ad0-b596-420b-a8bc-b4258526593b-kube-api-access-b27rc\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.794828 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.820369 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96372ad0-b596-420b-a8bc-b4258526593b" (UID: "96372ad0-b596-420b-a8bc-b4258526593b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:21:45 crc kubenswrapper[4768]: I0223 19:21:45.897387 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96372ad0-b596-420b-a8bc-b4258526593b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.150861 4768 generic.go:334] "Generic (PLEG): container finished" podID="96372ad0-b596-420b-a8bc-b4258526593b" containerID="f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8" exitCode=0 Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.150925 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerDied","Data":"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8"} Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.150978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vtkmn" event={"ID":"96372ad0-b596-420b-a8bc-b4258526593b","Type":"ContainerDied","Data":"dcfe8737ff0d23315f4033f856f6dc26c1fadc4a3d74383e9641d34caa48ae6a"} Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.151000 4768 scope.go:117] "RemoveContainer" containerID="f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.152521 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vtkmn" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.178704 4768 scope.go:117] "RemoveContainer" containerID="b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.207782 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.218285 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vtkmn"] Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.224523 4768 scope.go:117] "RemoveContainer" containerID="07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.266559 4768 scope.go:117] "RemoveContainer" containerID="f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8" Feb 23 19:21:46 crc kubenswrapper[4768]: E0223 19:21:46.267331 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8\": container with ID starting with f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8 not found: ID does not exist" containerID="f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.267382 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8"} err="failed to get container status \"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8\": rpc error: code = NotFound desc = could not find container \"f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8\": container with ID starting with f16673041a7c6e49e89335efeee91bac83fd09f10444f07e0f65b3429781a7b8 not found: ID does 
not exist" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.267436 4768 scope.go:117] "RemoveContainer" containerID="b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63" Feb 23 19:21:46 crc kubenswrapper[4768]: E0223 19:21:46.269336 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63\": container with ID starting with b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63 not found: ID does not exist" containerID="b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.269379 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63"} err="failed to get container status \"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63\": rpc error: code = NotFound desc = could not find container \"b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63\": container with ID starting with b907bf3f3386cf76e7b5a39b710223a59a324061d8c7416af8e027e44f4d2f63 not found: ID does not exist" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.269406 4768 scope.go:117] "RemoveContainer" containerID="07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781" Feb 23 19:21:46 crc kubenswrapper[4768]: E0223 19:21:46.271627 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781\": container with ID starting with 07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781 not found: ID does not exist" containerID="07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781" Feb 23 19:21:46 crc kubenswrapper[4768]: I0223 19:21:46.271669 4768 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781"} err="failed to get container status \"07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781\": rpc error: code = NotFound desc = could not find container \"07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781\": container with ID starting with 07df8cc5d84993b4dcc02b17ea64aecaac5db497b60ca92bea00fd9305e66781 not found: ID does not exist" Feb 23 19:21:47 crc kubenswrapper[4768]: I0223 19:21:47.326656 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96372ad0-b596-420b-a8bc-b4258526593b" path="/var/lib/kubelet/pods/96372ad0-b596-420b-a8bc-b4258526593b/volumes" Feb 23 19:22:09 crc kubenswrapper[4768]: I0223 19:22:09.544941 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:22:09 crc kubenswrapper[4768]: I0223 19:22:09.546052 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.546403 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.547185 4768 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.547310 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.548634 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.548741 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" gracePeriod=600 Feb 23 19:22:39 crc kubenswrapper[4768]: E0223 19:22:39.694622 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.831927 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" 
containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" exitCode=0 Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.831980 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491"} Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.832030 4768 scope.go:117] "RemoveContainer" containerID="cf62da9c1773c95b8b67e32cbd37e0469e907898a6d900c0ab50fb2577dbc0fd" Feb 23 19:22:39 crc kubenswrapper[4768]: I0223 19:22:39.832793 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:22:39 crc kubenswrapper[4768]: E0223 19:22:39.833640 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:22:51 crc kubenswrapper[4768]: I0223 19:22:51.308975 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:22:51 crc kubenswrapper[4768]: E0223 19:22:51.310238 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:23:03 crc kubenswrapper[4768]: I0223 
19:23:03.308430 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:23:03 crc kubenswrapper[4768]: E0223 19:23:03.309675 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:23:17 crc kubenswrapper[4768]: I0223 19:23:17.307596 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:23:17 crc kubenswrapper[4768]: E0223 19:23:17.308579 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:23:28 crc kubenswrapper[4768]: I0223 19:23:28.308402 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:23:28 crc kubenswrapper[4768]: E0223 19:23:28.312303 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:23:39 crc 
kubenswrapper[4768]: I0223 19:23:39.308405 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:23:39 crc kubenswrapper[4768]: E0223 19:23:39.309382 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:23:50 crc kubenswrapper[4768]: I0223 19:23:50.308127 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:23:50 crc kubenswrapper[4768]: E0223 19:23:50.309711 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:24:05 crc kubenswrapper[4768]: I0223 19:24:05.316506 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:24:05 crc kubenswrapper[4768]: E0223 19:24:05.318459 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 
23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.308930 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309556 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309577 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309601 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309608 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309624 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="extract-content" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309631 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="extract-content" Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309639 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="extract-content" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309646 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="extract-content" Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309670 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="extract-utilities" Feb 23 19:24:06 crc 
kubenswrapper[4768]: I0223 19:24:06.309688 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="extract-utilities" Feb 23 19:24:06 crc kubenswrapper[4768]: E0223 19:24:06.309712 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="extract-utilities" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309721 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="extract-utilities" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309966 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b682bf3-2092-48e5-acb9-8b2c1eef743a" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.309989 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="96372ad0-b596-420b-a8bc-b4258526593b" containerName="registry-server" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.311644 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.322949 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.373719 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.373904 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.373945 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-666qp\" (UniqueName: \"kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.476864 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.476996 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.477033 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-666qp\" (UniqueName: \"kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.477626 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.477760 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.499654 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-666qp\" (UniqueName: \"kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp\") pod \"community-operators-fmwk7\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:06 crc kubenswrapper[4768]: I0223 19:24:06.654807 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:07 crc kubenswrapper[4768]: I0223 19:24:07.206167 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:07 crc kubenswrapper[4768]: I0223 19:24:07.827914 4768 generic.go:334] "Generic (PLEG): container finished" podID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerID="7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941" exitCode=0 Feb 23 19:24:07 crc kubenswrapper[4768]: I0223 19:24:07.828153 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerDied","Data":"7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941"} Feb 23 19:24:07 crc kubenswrapper[4768]: I0223 19:24:07.828575 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerStarted","Data":"9b1ae1652909900380a2d0ae79696152da4d61c078d70b43005b16af26845b8e"} Feb 23 19:24:09 crc kubenswrapper[4768]: I0223 19:24:09.861396 4768 generic.go:334] "Generic (PLEG): container finished" podID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerID="ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469" exitCode=0 Feb 23 19:24:09 crc kubenswrapper[4768]: I0223 19:24:09.861502 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerDied","Data":"ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469"} Feb 23 19:24:10 crc kubenswrapper[4768]: I0223 19:24:10.874740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" 
event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerStarted","Data":"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217"} Feb 23 19:24:10 crc kubenswrapper[4768]: I0223 19:24:10.904100 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fmwk7" podStartSLOduration=2.407679318 podStartE2EDuration="4.904070393s" podCreationTimestamp="2026-02-23 19:24:06 +0000 UTC" firstStartedPulling="2026-02-23 19:24:07.830787278 +0000 UTC m=+3043.221273088" lastFinishedPulling="2026-02-23 19:24:10.327178353 +0000 UTC m=+3045.717664163" observedRunningTime="2026-02-23 19:24:10.890874493 +0000 UTC m=+3046.281360313" watchObservedRunningTime="2026-02-23 19:24:10.904070393 +0000 UTC m=+3046.294556213" Feb 23 19:24:16 crc kubenswrapper[4768]: I0223 19:24:16.658635 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:16 crc kubenswrapper[4768]: I0223 19:24:16.659574 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:16 crc kubenswrapper[4768]: I0223 19:24:16.729005 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:16 crc kubenswrapper[4768]: I0223 19:24:16.996208 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:17 crc kubenswrapper[4768]: I0223 19:24:17.040854 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:18 crc kubenswrapper[4768]: I0223 19:24:18.974440 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fmwk7" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="registry-server" 
containerID="cri-o://e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217" gracePeriod=2 Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.308204 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:24:19 crc kubenswrapper[4768]: E0223 19:24:19.308607 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.592726 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.728230 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities\") pod \"8f16972c-18b6-4073-9779-bdce10aa7f45\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.728377 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-666qp\" (UniqueName: \"kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp\") pod \"8f16972c-18b6-4073-9779-bdce10aa7f45\" (UID: \"8f16972c-18b6-4073-9779-bdce10aa7f45\") " Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.728409 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content\") pod \"8f16972c-18b6-4073-9779-bdce10aa7f45\" (UID: 
\"8f16972c-18b6-4073-9779-bdce10aa7f45\") " Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.729193 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities" (OuterVolumeSpecName: "utilities") pod "8f16972c-18b6-4073-9779-bdce10aa7f45" (UID: "8f16972c-18b6-4073-9779-bdce10aa7f45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.737526 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp" (OuterVolumeSpecName: "kube-api-access-666qp") pod "8f16972c-18b6-4073-9779-bdce10aa7f45" (UID: "8f16972c-18b6-4073-9779-bdce10aa7f45"). InnerVolumeSpecName "kube-api-access-666qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.831655 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.831698 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-666qp\" (UniqueName: \"kubernetes.io/projected/8f16972c-18b6-4073-9779-bdce10aa7f45-kube-api-access-666qp\") on node \"crc\" DevicePath \"\"" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.842657 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f16972c-18b6-4073-9779-bdce10aa7f45" (UID: "8f16972c-18b6-4073-9779-bdce10aa7f45"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.933861 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f16972c-18b6-4073-9779-bdce10aa7f45-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.991030 4768 generic.go:334] "Generic (PLEG): container finished" podID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerID="e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217" exitCode=0 Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.991097 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerDied","Data":"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217"} Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.991669 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fmwk7" event={"ID":"8f16972c-18b6-4073-9779-bdce10aa7f45","Type":"ContainerDied","Data":"9b1ae1652909900380a2d0ae79696152da4d61c078d70b43005b16af26845b8e"} Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.991704 4768 scope.go:117] "RemoveContainer" containerID="e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217" Feb 23 19:24:19 crc kubenswrapper[4768]: I0223 19:24:19.991149 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fmwk7" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.027157 4768 scope.go:117] "RemoveContainer" containerID="ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.035991 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.060045 4768 scope.go:117] "RemoveContainer" containerID="7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.061871 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fmwk7"] Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.100185 4768 scope.go:117] "RemoveContainer" containerID="e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217" Feb 23 19:24:20 crc kubenswrapper[4768]: E0223 19:24:20.100817 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217\": container with ID starting with e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217 not found: ID does not exist" containerID="e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.100869 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217"} err="failed to get container status \"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217\": rpc error: code = NotFound desc = could not find container \"e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217\": container with ID starting with e73698947ab87657a9eb1f0e47068b7992ef1e5b6986a90570ebbc048a2eb217 not 
found: ID does not exist" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.100900 4768 scope.go:117] "RemoveContainer" containerID="ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469" Feb 23 19:24:20 crc kubenswrapper[4768]: E0223 19:24:20.101466 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469\": container with ID starting with ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469 not found: ID does not exist" containerID="ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.101510 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469"} err="failed to get container status \"ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469\": rpc error: code = NotFound desc = could not find container \"ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469\": container with ID starting with ac9ed3c0dd89f270e20502ec0a39f9d70851df44152ad58bd78efec74b6bf469 not found: ID does not exist" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.101536 4768 scope.go:117] "RemoveContainer" containerID="7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941" Feb 23 19:24:20 crc kubenswrapper[4768]: E0223 19:24:20.101828 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941\": container with ID starting with 7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941 not found: ID does not exist" containerID="7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941" Feb 23 19:24:20 crc kubenswrapper[4768]: I0223 19:24:20.101852 4768 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941"} err="failed to get container status \"7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941\": rpc error: code = NotFound desc = could not find container \"7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941\": container with ID starting with 7e8a47a13cb323f1f7f065a57902b977e7c7aeda5734ea8f97f46966a0e76941 not found: ID does not exist" Feb 23 19:24:21 crc kubenswrapper[4768]: I0223 19:24:21.326689 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" path="/var/lib/kubelet/pods/8f16972c-18b6-4073-9779-bdce10aa7f45/volumes" Feb 23 19:24:31 crc kubenswrapper[4768]: I0223 19:24:31.307130 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:24:31 crc kubenswrapper[4768]: E0223 19:24:31.308163 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:24:45 crc kubenswrapper[4768]: I0223 19:24:45.314286 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:24:45 crc kubenswrapper[4768]: E0223 19:24:45.315355 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:24:57 crc kubenswrapper[4768]: I0223 19:24:57.308541 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:24:57 crc kubenswrapper[4768]: E0223 19:24:57.311145 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:25:12 crc kubenswrapper[4768]: I0223 19:25:12.308355 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:25:12 crc kubenswrapper[4768]: E0223 19:25:12.309695 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.738209 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:19 crc kubenswrapper[4768]: E0223 19:25:19.739348 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="extract-content" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 
19:25:19.739361 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="extract-content" Feb 23 19:25:19 crc kubenswrapper[4768]: E0223 19:25:19.739372 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="extract-utilities" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.739379 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="extract-utilities" Feb 23 19:25:19 crc kubenswrapper[4768]: E0223 19:25:19.739405 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="registry-server" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.739410 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="registry-server" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.739611 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f16972c-18b6-4073-9779-bdce10aa7f45" containerName="registry-server" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.740934 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.749413 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.798331 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxm2k\" (UniqueName: \"kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.798379 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.798421 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.899925 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxm2k\" (UniqueName: \"kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.899987 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.900016 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.900536 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.900580 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:19 crc kubenswrapper[4768]: I0223 19:25:19.925417 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxm2k\" (UniqueName: \"kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k\") pod \"certified-operators-6qrdj\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:20 crc kubenswrapper[4768]: I0223 19:25:20.104440 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:20 crc kubenswrapper[4768]: I0223 19:25:20.641639 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:20 crc kubenswrapper[4768]: I0223 19:25:20.678133 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerStarted","Data":"7ef2ba56a6ecd1f0ab34e9bcc640a2997a90b0e7db5cef33afb4f32d7396a051"} Feb 23 19:25:21 crc kubenswrapper[4768]: I0223 19:25:21.690084 4768 generic.go:334] "Generic (PLEG): container finished" podID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerID="1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed" exitCode=0 Feb 23 19:25:21 crc kubenswrapper[4768]: I0223 19:25:21.690185 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerDied","Data":"1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed"} Feb 23 19:25:22 crc kubenswrapper[4768]: I0223 19:25:22.700832 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerStarted","Data":"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4"} Feb 23 19:25:23 crc kubenswrapper[4768]: I0223 19:25:23.713671 4768 generic.go:334] "Generic (PLEG): container finished" podID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerID="6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4" exitCode=0 Feb 23 19:25:23 crc kubenswrapper[4768]: I0223 19:25:23.713767 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" 
event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerDied","Data":"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4"} Feb 23 19:25:24 crc kubenswrapper[4768]: I0223 19:25:24.725269 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerStarted","Data":"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3"} Feb 23 19:25:24 crc kubenswrapper[4768]: I0223 19:25:24.749735 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qrdj" podStartSLOduration=3.318813656 podStartE2EDuration="5.7497179s" podCreationTimestamp="2026-02-23 19:25:19 +0000 UTC" firstStartedPulling="2026-02-23 19:25:21.694053594 +0000 UTC m=+3117.084539394" lastFinishedPulling="2026-02-23 19:25:24.124957838 +0000 UTC m=+3119.515443638" observedRunningTime="2026-02-23 19:25:24.7446011 +0000 UTC m=+3120.135086970" watchObservedRunningTime="2026-02-23 19:25:24.7497179 +0000 UTC m=+3120.140203700" Feb 23 19:25:26 crc kubenswrapper[4768]: I0223 19:25:26.307386 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:25:26 crc kubenswrapper[4768]: E0223 19:25:26.308995 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:25:30 crc kubenswrapper[4768]: I0223 19:25:30.105381 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:30 crc 
kubenswrapper[4768]: I0223 19:25:30.106016 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:30 crc kubenswrapper[4768]: I0223 19:25:30.185857 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:30 crc kubenswrapper[4768]: I0223 19:25:30.825722 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:32 crc kubenswrapper[4768]: I0223 19:25:32.649944 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:32 crc kubenswrapper[4768]: I0223 19:25:32.801057 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qrdj" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="registry-server" containerID="cri-o://07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3" gracePeriod=2 Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.348389 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.505721 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content\") pod \"9532555c-abb5-4c0d-b9c6-67b0c956c407\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.505766 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities\") pod \"9532555c-abb5-4c0d-b9c6-67b0c956c407\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.505843 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxm2k\" (UniqueName: \"kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k\") pod \"9532555c-abb5-4c0d-b9c6-67b0c956c407\" (UID: \"9532555c-abb5-4c0d-b9c6-67b0c956c407\") " Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.507432 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities" (OuterVolumeSpecName: "utilities") pod "9532555c-abb5-4c0d-b9c6-67b0c956c407" (UID: "9532555c-abb5-4c0d-b9c6-67b0c956c407"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.512814 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k" (OuterVolumeSpecName: "kube-api-access-lxm2k") pod "9532555c-abb5-4c0d-b9c6-67b0c956c407" (UID: "9532555c-abb5-4c0d-b9c6-67b0c956c407"). InnerVolumeSpecName "kube-api-access-lxm2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.569442 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9532555c-abb5-4c0d-b9c6-67b0c956c407" (UID: "9532555c-abb5-4c0d-b9c6-67b0c956c407"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.608956 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.608999 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9532555c-abb5-4c0d-b9c6-67b0c956c407-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.609016 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxm2k\" (UniqueName: \"kubernetes.io/projected/9532555c-abb5-4c0d-b9c6-67b0c956c407-kube-api-access-lxm2k\") on node \"crc\" DevicePath \"\"" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.815307 4768 generic.go:334] "Generic (PLEG): container finished" podID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerID="07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3" exitCode=0 Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.815366 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerDied","Data":"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3"} Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.815403 4768 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-6qrdj" event={"ID":"9532555c-abb5-4c0d-b9c6-67b0c956c407","Type":"ContainerDied","Data":"7ef2ba56a6ecd1f0ab34e9bcc640a2997a90b0e7db5cef33afb4f32d7396a051"} Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.815432 4768 scope.go:117] "RemoveContainer" containerID="07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.815592 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qrdj" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.842920 4768 scope.go:117] "RemoveContainer" containerID="6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.865468 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.879134 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qrdj"] Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.883239 4768 scope.go:117] "RemoveContainer" containerID="1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.929925 4768 scope.go:117] "RemoveContainer" containerID="07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3" Feb 23 19:25:33 crc kubenswrapper[4768]: E0223 19:25:33.930497 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3\": container with ID starting with 07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3 not found: ID does not exist" containerID="07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 
19:25:33.930537 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3"} err="failed to get container status \"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3\": rpc error: code = NotFound desc = could not find container \"07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3\": container with ID starting with 07606c36867c1e4cfd3a7e43c558ccde00a5c1005d285dd5d00b630c305408b3 not found: ID does not exist" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.930564 4768 scope.go:117] "RemoveContainer" containerID="6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4" Feb 23 19:25:33 crc kubenswrapper[4768]: E0223 19:25:33.931171 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4\": container with ID starting with 6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4 not found: ID does not exist" containerID="6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.931198 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4"} err="failed to get container status \"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4\": rpc error: code = NotFound desc = could not find container \"6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4\": container with ID starting with 6cd0cecf5ecdf8fd27e199bf3d003e82b0028c40ef9c5687b0054d09b250e1b4 not found: ID does not exist" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.931220 4768 scope.go:117] "RemoveContainer" containerID="1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed" Feb 23 19:25:33 crc 
kubenswrapper[4768]: E0223 19:25:33.931863 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed\": container with ID starting with 1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed not found: ID does not exist" containerID="1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed" Feb 23 19:25:33 crc kubenswrapper[4768]: I0223 19:25:33.931900 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed"} err="failed to get container status \"1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed\": rpc error: code = NotFound desc = could not find container \"1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed\": container with ID starting with 1e4d961631cfeb5f4551d3dd1264dbaed28320c60f0373aee13573471fd7a5ed not found: ID does not exist" Feb 23 19:25:35 crc kubenswrapper[4768]: I0223 19:25:35.325680 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" path="/var/lib/kubelet/pods/9532555c-abb5-4c0d-b9c6-67b0c956c407/volumes" Feb 23 19:25:37 crc kubenswrapper[4768]: I0223 19:25:37.309443 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:25:37 crc kubenswrapper[4768]: E0223 19:25:37.310088 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:25:51 crc 
kubenswrapper[4768]: I0223 19:25:51.307574 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:25:51 crc kubenswrapper[4768]: E0223 19:25:51.309598 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:26:02 crc kubenswrapper[4768]: I0223 19:26:02.307384 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:26:02 crc kubenswrapper[4768]: E0223 19:26:02.308295 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:26:15 crc kubenswrapper[4768]: I0223 19:26:15.316033 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:26:15 crc kubenswrapper[4768]: E0223 19:26:15.317210 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 
23 19:26:28 crc kubenswrapper[4768]: I0223 19:26:28.308521 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:26:28 crc kubenswrapper[4768]: E0223 19:26:28.309493 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:26:43 crc kubenswrapper[4768]: I0223 19:26:43.307760 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:26:43 crc kubenswrapper[4768]: E0223 19:26:43.309285 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:26:57 crc kubenswrapper[4768]: I0223 19:26:57.308234 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:26:57 crc kubenswrapper[4768]: E0223 19:26:57.309769 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:27:09 crc kubenswrapper[4768]: I0223 19:27:09.307715 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:27:09 crc kubenswrapper[4768]: E0223 19:27:09.310172 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:27:22 crc kubenswrapper[4768]: I0223 19:27:22.307853 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:27:22 crc kubenswrapper[4768]: E0223 19:27:22.310457 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:27:37 crc kubenswrapper[4768]: I0223 19:27:37.309092 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:27:37 crc kubenswrapper[4768]: E0223 19:27:37.310317 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:27:52 crc kubenswrapper[4768]: I0223 19:27:52.307858 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:27:53 crc kubenswrapper[4768]: I0223 19:27:53.294430 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b"} Feb 23 19:28:52 crc kubenswrapper[4768]: I0223 19:28:52.940913 4768 generic.go:334] "Generic (PLEG): container finished" podID="89c93f99-08a8-4231-8b96-d307d0525745" containerID="98ea015b77ebe053b1cf8d928aee83aceab81038f809c6731a82cdd160ea8388" exitCode=0 Feb 23 19:28:52 crc kubenswrapper[4768]: I0223 19:28:52.941661 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"89c93f99-08a8-4231-8b96-d307d0525745","Type":"ContainerDied","Data":"98ea015b77ebe053b1cf8d928aee83aceab81038f809c6731a82cdd160ea8388"} Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.437663 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512357 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t528h\" (UniqueName: \"kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512552 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512615 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512649 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512691 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512745 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512771 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.512827 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir\") pod \"89c93f99-08a8-4231-8b96-d307d0525745\" (UID: \"89c93f99-08a8-4231-8b96-d307d0525745\") " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.513414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data" (OuterVolumeSpecName: "config-data") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.513859 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.516428 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.518617 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.520521 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h" (OuterVolumeSpecName: "kube-api-access-t528h") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "kube-api-access-t528h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.542744 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.545214 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.545312 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.567405 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "89c93f99-08a8-4231-8b96-d307d0525745" (UID: "89c93f99-08a8-4231-8b96-d307d0525745"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614897 4768 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614936 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614952 4768 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614964 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614975 4768 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614986 4768 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/89c93f99-08a8-4231-8b96-d307d0525745-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.614997 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t528h\" (UniqueName: \"kubernetes.io/projected/89c93f99-08a8-4231-8b96-d307d0525745-kube-api-access-t528h\") on node \"crc\" 
DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.615010 4768 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/89c93f99-08a8-4231-8b96-d307d0525745-config-data\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.615021 4768 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/89c93f99-08a8-4231-8b96-d307d0525745-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.637331 4768 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.717213 4768 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.970071 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"89c93f99-08a8-4231-8b96-d307d0525745","Type":"ContainerDied","Data":"453e03a5147dbf933442eb3018522cd704da3f857dfcf38073398f46f81411ea"} Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.970650 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="453e03a5147dbf933442eb3018522cd704da3f857dfcf38073398f46f81411ea" Feb 23 19:28:54 crc kubenswrapper[4768]: I0223 19:28:54.970140 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.442414 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 23 19:29:00 crc kubenswrapper[4768]: E0223 19:29:00.443390 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="extract-utilities"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443407 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="extract-utilities"
Feb 23 19:29:00 crc kubenswrapper[4768]: E0223 19:29:00.443433 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="registry-server"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443440 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="registry-server"
Feb 23 19:29:00 crc kubenswrapper[4768]: E0223 19:29:00.443464 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c93f99-08a8-4231-8b96-d307d0525745" containerName="tempest-tests-tempest-tests-runner"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443473 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c93f99-08a8-4231-8b96-d307d0525745" containerName="tempest-tests-tempest-tests-runner"
Feb 23 19:29:00 crc kubenswrapper[4768]: E0223 19:29:00.443496 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="extract-content"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443506 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="extract-content"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443729 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c93f99-08a8-4231-8b96-d307d0525745" containerName="tempest-tests-tempest-tests-runner"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.443754 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9532555c-abb5-4c0d-b9c6-67b0c956c407" containerName="registry-server"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.444548 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.450871 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-7lcml"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.480750 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.553829 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br456\" (UniqueName: \"kubernetes.io/projected/0f9b3373-10b7-4e2c-8b9f-985eb74fb53d-kube-api-access-br456\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.554055 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.655981 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br456\" (UniqueName: \"kubernetes.io/projected/0f9b3373-10b7-4e2c-8b9f-985eb74fb53d-kube-api-access-br456\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.656097 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.656642 4768 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.681161 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br456\" (UniqueName: \"kubernetes.io/projected/0f9b3373-10b7-4e2c-8b9f-985eb74fb53d-kube-api-access-br456\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.689662 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:00 crc kubenswrapper[4768]: I0223 19:29:00.782385 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Feb 23 19:29:01 crc kubenswrapper[4768]: I0223 19:29:01.220070 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 23 19:29:01 crc kubenswrapper[4768]: I0223 19:29:01.226340 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 19:29:02 crc kubenswrapper[4768]: I0223 19:29:02.068276 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d","Type":"ContainerStarted","Data":"47a9aea18fe3ee6a8dc10bfac459f17762c70417acd655add9c4b645af096a10"}
Feb 23 19:29:03 crc kubenswrapper[4768]: I0223 19:29:03.077225 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0f9b3373-10b7-4e2c-8b9f-985eb74fb53d","Type":"ContainerStarted","Data":"c6be61b195e7c5191a62433f46c072b2e94c8f9caec6179bcb1b8f6a2a657765"}
Feb 23 19:29:03 crc kubenswrapper[4768]: I0223 19:29:03.103788 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.307986847 podStartE2EDuration="3.10376747s" podCreationTimestamp="2026-02-23 19:29:00 +0000 UTC" firstStartedPulling="2026-02-23 19:29:01.226108363 +0000 UTC m=+3336.616594163" lastFinishedPulling="2026-02-23 19:29:02.021888936 +0000 UTC m=+3337.412374786" observedRunningTime="2026-02-23 19:29:03.092396281 +0000 UTC m=+3338.482882121" watchObservedRunningTime="2026-02-23 19:29:03.10376747 +0000 UTC m=+3338.494253280"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.772965 4768 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-must-gather-nfkwh/must-gather-275s8"]
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.775179 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.781100 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-nfkwh"/"default-dockercfg-zcm6t"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.781100 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nfkwh"/"openshift-service-ca.crt"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.781244 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-nfkwh"/"kube-root-ca.crt"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.798864 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nfkwh/must-gather-275s8"]
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.805181 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrx8m\" (UniqueName: \"kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.805272 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.907298 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrx8m\" (UniqueName: \"kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.907586 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.908057 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:23 crc kubenswrapper[4768]: I0223 19:29:23.938894 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrx8m\" (UniqueName: \"kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m\") pod \"must-gather-275s8\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:24 crc kubenswrapper[4768]: I0223 19:29:24.094278 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/must-gather-275s8"
Feb 23 19:29:24 crc kubenswrapper[4768]: I0223 19:29:24.627985 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-nfkwh/must-gather-275s8"]
Feb 23 19:29:25 crc kubenswrapper[4768]: I0223 19:29:25.362227 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/must-gather-275s8" event={"ID":"af07cf12-afdf-443f-8ee6-b20f9eb92269","Type":"ContainerStarted","Data":"88384b787992fcc08cabe2e1ecb1025933fb585551e985f86e69fbdb47d55fce"}
Feb 23 19:29:31 crc kubenswrapper[4768]: I0223 19:29:31.420975 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/must-gather-275s8" event={"ID":"af07cf12-afdf-443f-8ee6-b20f9eb92269","Type":"ContainerStarted","Data":"76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822"}
Feb 23 19:29:31 crc kubenswrapper[4768]: I0223 19:29:31.421590 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/must-gather-275s8" event={"ID":"af07cf12-afdf-443f-8ee6-b20f9eb92269","Type":"ContainerStarted","Data":"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42"}
Feb 23 19:29:31 crc kubenswrapper[4768]: I0223 19:29:31.444618 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nfkwh/must-gather-275s8" podStartSLOduration=2.654455144 podStartE2EDuration="8.444589729s" podCreationTimestamp="2026-02-23 19:29:23 +0000 UTC" firstStartedPulling="2026-02-23 19:29:24.632545011 +0000 UTC m=+3360.023030811" lastFinishedPulling="2026-02-23 19:29:30.422679586 +0000 UTC m=+3365.813165396" observedRunningTime="2026-02-23 19:29:31.435950524 +0000 UTC m=+3366.826436334" watchObservedRunningTime="2026-02-23 19:29:31.444589729 +0000 UTC m=+3366.835075539"
Feb 23 19:29:33 crc kubenswrapper[4768]: E0223 19:29:33.855611 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.115:58936->38.102.83.115:37299: write tcp 38.102.83.115:58936->38.102.83.115:37299: write: broken pipe
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.364360 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-8gk2w"]
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.365847 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.437603 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4hvv\" (UniqueName: \"kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") " pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.437796 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") " pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.539384 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4hvv\" (UniqueName: \"kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") " pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.539527 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") "
pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.539639 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") " pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.557381 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4hvv\" (UniqueName: \"kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv\") pod \"crc-debug-8gk2w\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") " pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:34 crc kubenswrapper[4768]: I0223 19:29:34.683136 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:29:35 crc kubenswrapper[4768]: I0223 19:29:35.470490 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w" event={"ID":"cb562be8-e64d-482f-8154-e29284da7871","Type":"ContainerStarted","Data":"35280afbc31e13586fe08c2c0ecc09d1a682c0830f1feaaf3b87efe7d780a288"}
Feb 23 19:29:46 crc kubenswrapper[4768]: I0223 19:29:46.562859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w" event={"ID":"cb562be8-e64d-482f-8154-e29284da7871","Type":"ContainerStarted","Data":"600f72ccae170763cb4e11674e0fa9d7c150ee3b4326a170137786aefd30586b"}
Feb 23 19:29:46 crc kubenswrapper[4768]: I0223 19:29:46.586060 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w" podStartSLOduration=1.603830598 podStartE2EDuration="12.586039342s" podCreationTimestamp="2026-02-23 19:29:34 +0000 UTC" firstStartedPulling="2026-02-23 19:29:34.721330477 +0000 UTC m=+3370.111816277" lastFinishedPulling="2026-02-23 19:29:45.703539201 +0000 UTC m=+3381.094025021" observedRunningTime="2026-02-23 19:29:46.578115507 +0000 UTC m=+3381.968601307" watchObservedRunningTime="2026-02-23 19:29:46.586039342 +0000 UTC m=+3381.976525142"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.159287 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"]
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.161463 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.164461 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.164733 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.170115 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"]
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.175608 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.175695 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m767\" (UniqueName: \"kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.175914 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.278169 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m767\" (UniqueName: \"kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.278746 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.278860 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.279641 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.284791 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.295914 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m767\" (UniqueName: \"kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767\") pod \"collect-profiles-29531250-s6g7q\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.488350 4768 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:00 crc kubenswrapper[4768]: I0223 19:30:00.957997 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"]
Feb 23 19:30:00 crc kubenswrapper[4768]: W0223 19:30:00.967712 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2978da34_86fd_464c_963e_4af4a8bf3112.slice/crio-0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332 WatchSource:0}: Error finding container 0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332: Status 404 returned error can't find the container with id 0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332
Feb 23 19:30:01 crc kubenswrapper[4768]: I0223 19:30:01.711814 4768 generic.go:334] "Generic (PLEG): container finished" podID="2978da34-86fd-464c-963e-4af4a8bf3112" containerID="144f2589c7e27a64673b923e373aaec967df6bcc409dcf4c3ccaf3640d46b031" exitCode=0
Feb 23 19:30:01 crc kubenswrapper[4768]: I0223 19:30:01.712002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q" event={"ID":"2978da34-86fd-464c-963e-4af4a8bf3112","Type":"ContainerDied","Data":"144f2589c7e27a64673b923e373aaec967df6bcc409dcf4c3ccaf3640d46b031"}
Feb 23 19:30:01 crc kubenswrapper[4768]: I0223 19:30:01.712157 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q" event={"ID":"2978da34-86fd-464c-963e-4af4a8bf3112","Type":"ContainerStarted","Data":"0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332"}
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.147743 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.230713 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m767\" (UniqueName: \"kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767\") pod \"2978da34-86fd-464c-963e-4af4a8bf3112\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") "
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.230778 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume\") pod \"2978da34-86fd-464c-963e-4af4a8bf3112\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") "
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.230820 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume\") pod \"2978da34-86fd-464c-963e-4af4a8bf3112\" (UID: \"2978da34-86fd-464c-963e-4af4a8bf3112\") "
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.231830 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume" (OuterVolumeSpecName: "config-volume") pod "2978da34-86fd-464c-963e-4af4a8bf3112" (UID: "2978da34-86fd-464c-963e-4af4a8bf3112"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.236683 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767" (OuterVolumeSpecName: "kube-api-access-2m767") pod "2978da34-86fd-464c-963e-4af4a8bf3112" (UID: "2978da34-86fd-464c-963e-4af4a8bf3112"). InnerVolumeSpecName "kube-api-access-2m767". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.250950 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2978da34-86fd-464c-963e-4af4a8bf3112" (UID: "2978da34-86fd-464c-963e-4af4a8bf3112"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.332858 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2978da34-86fd-464c-963e-4af4a8bf3112-config-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.332900 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2978da34-86fd-464c-963e-4af4a8bf3112-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.332910 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2m767\" (UniqueName: \"kubernetes.io/projected/2978da34-86fd-464c-963e-4af4a8bf3112-kube-api-access-2m767\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.731318 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q" event={"ID":"2978da34-86fd-464c-963e-4af4a8bf3112","Type":"ContainerDied","Data":"0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332"}
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.731363 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0335a892d970ac7a70258b863c7cd9728426e7341bf052c5af3e25b8a9c01332"
Feb 23 19:30:03 crc kubenswrapper[4768]: I0223 19:30:03.731441 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531250-s6g7q"
Feb 23 19:30:04 crc kubenswrapper[4768]: I0223 19:30:04.221466 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp"]
Feb 23 19:30:04 crc kubenswrapper[4768]: I0223 19:30:04.231024 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531205-w5jjp"]
Feb 23 19:30:05 crc kubenswrapper[4768]: I0223 19:30:05.317644 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b830f829-652e-448e-9a7b-ec0c1d91cee9" path="/var/lib/kubelet/pods/b830f829-652e-448e-9a7b-ec0c1d91cee9/volumes"
Feb 23 19:30:09 crc kubenswrapper[4768]: I0223 19:30:09.545687 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 19:30:09 crc kubenswrapper[4768]: I0223 19:30:09.546270 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 19:30:22 crc kubenswrapper[4768]: I0223 19:30:22.267780 4768 generic.go:334] "Generic (PLEG): container finished" podID="cb562be8-e64d-482f-8154-e29284da7871" containerID="600f72ccae170763cb4e11674e0fa9d7c150ee3b4326a170137786aefd30586b" exitCode=0
Feb 23 19:30:22 crc kubenswrapper[4768]: I0223 19:30:22.267875 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
event={"ID":"cb562be8-e64d-482f-8154-e29284da7871","Type":"ContainerDied","Data":"600f72ccae170763cb4e11674e0fa9d7c150ee3b4326a170137786aefd30586b"}
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.419135 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.489524 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-8gk2w"]
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.514001 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-8gk2w"]
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.531569 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4hvv\" (UniqueName: \"kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv\") pod \"cb562be8-e64d-482f-8154-e29284da7871\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") "
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.531788 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host\") pod \"cb562be8-e64d-482f-8154-e29284da7871\" (UID: \"cb562be8-e64d-482f-8154-e29284da7871\") "
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.532471 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host" (OuterVolumeSpecName: "host") pod "cb562be8-e64d-482f-8154-e29284da7871" (UID: "cb562be8-e64d-482f-8154-e29284da7871"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.552300 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv" (OuterVolumeSpecName: "kube-api-access-g4hvv") pod "cb562be8-e64d-482f-8154-e29284da7871" (UID: "cb562be8-e64d-482f-8154-e29284da7871"). InnerVolumeSpecName "kube-api-access-g4hvv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.634950 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4hvv\" (UniqueName: \"kubernetes.io/projected/cb562be8-e64d-482f-8154-e29284da7871-kube-api-access-g4hvv\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:23 crc kubenswrapper[4768]: I0223 19:30:23.634989 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb562be8-e64d-482f-8154-e29284da7871-host\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.288285 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35280afbc31e13586fe08c2c0ecc09d1a682c0830f1feaaf3b87efe7d780a288"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.288382 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-8gk2w"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.719904 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-7dbjz"]
Feb 23 19:30:24 crc kubenswrapper[4768]: E0223 19:30:24.721571 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb562be8-e64d-482f-8154-e29284da7871" containerName="container-00"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.721589 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb562be8-e64d-482f-8154-e29284da7871" containerName="container-00"
Feb 23 19:30:24 crc kubenswrapper[4768]: E0223 19:30:24.721600 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2978da34-86fd-464c-963e-4af4a8bf3112" containerName="collect-profiles"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.721606 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="2978da34-86fd-464c-963e-4af4a8bf3112" containerName="collect-profiles"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.721781 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="2978da34-86fd-464c-963e-4af4a8bf3112" containerName="collect-profiles"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.721794 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb562be8-e64d-482f-8154-e29284da7871" containerName="container-00"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.722387 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.865905 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6mxd\" (UniqueName: \"kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.865963 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.967845 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6mxd\" (UniqueName: \"kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.967905 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.968002 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:24 crc kubenswrapper[4768]: I0223 19:30:24.992738 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6mxd\" (UniqueName: \"kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd\") pod \"crc-debug-7dbjz\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") " pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:25 crc kubenswrapper[4768]: I0223 19:30:25.045612 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:25 crc kubenswrapper[4768]: I0223 19:30:25.302859 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz" event={"ID":"cd2030a3-748b-4e0d-a56b-9e6d6323f77e","Type":"ContainerStarted","Data":"f8164d46ddf141e93bfcdef21d8721771ff10e89367f607273f2e84790587544"}
Feb 23 19:30:25 crc kubenswrapper[4768]: I0223 19:30:25.336866 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb562be8-e64d-482f-8154-e29284da7871" path="/var/lib/kubelet/pods/cb562be8-e64d-482f-8154-e29284da7871/volumes"
Feb 23 19:30:26 crc kubenswrapper[4768]: I0223 19:30:26.317039 4768 generic.go:334] "Generic (PLEG): container finished" podID="cd2030a3-748b-4e0d-a56b-9e6d6323f77e" containerID="02ac3ad20ec8f273a58f417b3de936103de329ccc58fc0a6cb565c6481f2c0b8" exitCode=0
Feb 23 19:30:26 crc kubenswrapper[4768]: I0223 19:30:26.317120 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz" event={"ID":"cd2030a3-748b-4e0d-a56b-9e6d6323f77e","Type":"ContainerDied","Data":"02ac3ad20ec8f273a58f417b3de936103de329ccc58fc0a6cb565c6481f2c0b8"}
Feb 23 19:30:26 crc kubenswrapper[4768]: I0223 19:30:26.880490 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-7dbjz"]
Feb 23 19:30:26 crc kubenswrapper[4768]: I0223 19:30:26.888266 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-7dbjz"]
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.438286 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz"
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.518762 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host\") pod \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") "
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.518970 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host" (OuterVolumeSpecName: "host") pod "cd2030a3-748b-4e0d-a56b-9e6d6323f77e" (UID: "cd2030a3-748b-4e0d-a56b-9e6d6323f77e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.518994 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6mxd\" (UniqueName: \"kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd\") pod \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\" (UID: \"cd2030a3-748b-4e0d-a56b-9e6d6323f77e\") "
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.519457 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-host\") on node \"crc\" DevicePath \"\""
Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.525486 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd" (OuterVolumeSpecName: "kube-api-access-d6mxd") pod "cd2030a3-748b-4e0d-a56b-9e6d6323f77e" (UID: "cd2030a3-748b-4e0d-a56b-9e6d6323f77e").
InnerVolumeSpecName "kube-api-access-d6mxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:30:27 crc kubenswrapper[4768]: I0223 19:30:27.621588 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6mxd\" (UniqueName: \"kubernetes.io/projected/cd2030a3-748b-4e0d-a56b-9e6d6323f77e-kube-api-access-d6mxd\") on node \"crc\" DevicePath \"\"" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.060661 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-v888z"] Feb 23 19:30:28 crc kubenswrapper[4768]: E0223 19:30:28.061633 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd2030a3-748b-4e0d-a56b-9e6d6323f77e" containerName="container-00" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.061659 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd2030a3-748b-4e0d-a56b-9e6d6323f77e" containerName="container-00" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.061910 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd2030a3-748b-4e0d-a56b-9e6d6323f77e" containerName="container-00" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.062773 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.130108 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.130382 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc8dj\" (UniqueName: \"kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.232596 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.232770 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc8dj\" (UniqueName: \"kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.232911 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc 
kubenswrapper[4768]: I0223 19:30:28.248560 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc8dj\" (UniqueName: \"kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj\") pod \"crc-debug-v888z\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.341228 4768 scope.go:117] "RemoveContainer" containerID="02ac3ad20ec8f273a58f417b3de936103de329ccc58fc0a6cb565c6481f2c0b8" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.341349 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-7dbjz" Feb 23 19:30:28 crc kubenswrapper[4768]: I0223 19:30:28.382086 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:28 crc kubenswrapper[4768]: W0223 19:30:28.404613 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae60615f_1836_4352_a014_033f988df75e.slice/crio-063ce595c45f14f52a67952a91b658688249599973e39aeee3a6c6ff8b134446 WatchSource:0}: Error finding container 063ce595c45f14f52a67952a91b658688249599973e39aeee3a6c6ff8b134446: Status 404 returned error can't find the container with id 063ce595c45f14f52a67952a91b658688249599973e39aeee3a6c6ff8b134446 Feb 23 19:30:29 crc kubenswrapper[4768]: I0223 19:30:29.318410 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd2030a3-748b-4e0d-a56b-9e6d6323f77e" path="/var/lib/kubelet/pods/cd2030a3-748b-4e0d-a56b-9e6d6323f77e/volumes" Feb 23 19:30:29 crc kubenswrapper[4768]: I0223 19:30:29.353586 4768 generic.go:334] "Generic (PLEG): container finished" podID="ae60615f-1836-4352-a014-033f988df75e" containerID="e6b779ccf19a734f996f3833d86576baf98be35f24cdee7f54415cceeb1f140d" exitCode=0 Feb 23 19:30:29 crc 
kubenswrapper[4768]: I0223 19:30:29.353643 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-v888z" event={"ID":"ae60615f-1836-4352-a014-033f988df75e","Type":"ContainerDied","Data":"e6b779ccf19a734f996f3833d86576baf98be35f24cdee7f54415cceeb1f140d"} Feb 23 19:30:29 crc kubenswrapper[4768]: I0223 19:30:29.353692 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/crc-debug-v888z" event={"ID":"ae60615f-1836-4352-a014-033f988df75e","Type":"ContainerStarted","Data":"063ce595c45f14f52a67952a91b658688249599973e39aeee3a6c6ff8b134446"} Feb 23 19:30:29 crc kubenswrapper[4768]: I0223 19:30:29.427201 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-v888z"] Feb 23 19:30:29 crc kubenswrapper[4768]: I0223 19:30:29.435973 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nfkwh/crc-debug-v888z"] Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.463729 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.583972 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host" (OuterVolumeSpecName: "host") pod "ae60615f-1836-4352-a014-033f988df75e" (UID: "ae60615f-1836-4352-a014-033f988df75e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.583994 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host\") pod \"ae60615f-1836-4352-a014-033f988df75e\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.584210 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc8dj\" (UniqueName: \"kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj\") pod \"ae60615f-1836-4352-a014-033f988df75e\" (UID: \"ae60615f-1836-4352-a014-033f988df75e\") " Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.584750 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae60615f-1836-4352-a014-033f988df75e-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.593141 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj" (OuterVolumeSpecName: "kube-api-access-xc8dj") pod "ae60615f-1836-4352-a014-033f988df75e" (UID: "ae60615f-1836-4352-a014-033f988df75e"). InnerVolumeSpecName "kube-api-access-xc8dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:30:30 crc kubenswrapper[4768]: I0223 19:30:30.686425 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc8dj\" (UniqueName: \"kubernetes.io/projected/ae60615f-1836-4352-a014-033f988df75e-kube-api-access-xc8dj\") on node \"crc\" DevicePath \"\"" Feb 23 19:30:31 crc kubenswrapper[4768]: I0223 19:30:31.343298 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae60615f-1836-4352-a014-033f988df75e" path="/var/lib/kubelet/pods/ae60615f-1836-4352-a014-033f988df75e/volumes" Feb 23 19:30:31 crc kubenswrapper[4768]: I0223 19:30:31.387473 4768 scope.go:117] "RemoveContainer" containerID="e6b779ccf19a734f996f3833d86576baf98be35f24cdee7f54415cceeb1f140d" Feb 23 19:30:31 crc kubenswrapper[4768]: I0223 19:30:31.387867 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/crc-debug-v888z" Feb 23 19:30:39 crc kubenswrapper[4768]: I0223 19:30:39.545270 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:30:39 crc kubenswrapper[4768]: I0223 19:30:39.545898 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:30:41 crc kubenswrapper[4768]: I0223 19:30:41.792152 4768 scope.go:117] "RemoveContainer" containerID="99c2fab5191623685bdc0925142b73d07eef1849ce06d1ac85bab4b40e542e44" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.533819 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-7bc688ffdb-gftft_2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44/barbican-api/0.log" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.684511 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bc688ffdb-gftft_2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44/barbican-api-log/0.log" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.739335 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5df7bc8868-6w74x_93487b6e-adae-4467-bc6f-022380ad3028/barbican-keystone-listener/0.log" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.815902 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5df7bc8868-6w74x_93487b6e-adae-4467-bc6f-022380ad3028/barbican-keystone-listener-log/0.log" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.927988 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9495fd7c-5kc55_df97f54a-8ff1-4de9-9a88-80561f4aa819/barbican-worker/0.log" Feb 23 19:30:50 crc kubenswrapper[4768]: I0223 19:30:50.993717 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9495fd7c-5kc55_df97f54a-8ff1-4de9-9a88-80561f4aa819/barbican-worker-log/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.162167 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc_dbe6c2e2-e359-4953-848a-c06651ec5760/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.189675 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/ceilometer-central-agent/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.315489 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/ceilometer-notification-agent/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.338525 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/proxy-httpd/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.364729 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/sg-core/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.563531 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_41e166f4-a4aa-4185-b21d-36037d575748/cinder-api-log/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.570050 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_41e166f4-a4aa-4185-b21d-36037d575748/cinder-api/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.681300 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3ef90267-50a1-45c4-9c1e-95f2ce0bce4b/cinder-scheduler/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.784090 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3ef90267-50a1-45c4-9c1e-95f2ce0bce4b/probe/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.856216 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq_fd5b2e52-1d19-459a-ae2f-a78b5a7df018/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:51 crc kubenswrapper[4768]: I0223 19:30:51.969399 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf_3945e9f4-308e-4769-a7b0-2984578eda25/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 
19:30:52.079954 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/init/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.249729 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/init/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.320672 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/dnsmasq-dns/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.351612 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7_964d25fb-0600-4332-9f40-85f700d35088/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.529456 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae86f9fa-10bf-4fbc-b768-0ac7e643483b/glance-httpd/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.542208 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae86f9fa-10bf-4fbc-b768-0ac7e643483b/glance-log/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.690492 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_95f60b43-7764-4d1c-bf7f-150e7fceef75/glance-log/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.715104 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_95f60b43-7764-4d1c-bf7f-150e7fceef75/glance-httpd/0.log" Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.906756 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58cc9986b4-t7tcs_5fe017d9-f16b-465c-97a0-ebe4466006f0/horizon/0.log" 
Feb 23 19:30:52 crc kubenswrapper[4768]: I0223 19:30:52.979038 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f_de5a4703-0650-427d-a791-f9a3386ca413/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.115009 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58cc9986b4-t7tcs_5fe017d9-f16b-465c-97a0-ebe4466006f0/horizon-log/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.346716 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lgdpl_fa8ac6dd-0b71-465d-8658-5c10d07f1e0c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.584306 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531221-9gvxw_2f06a77a-756a-4cc8-9cea-c6c0da57bfd0/keystone-cron/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.627734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f7bc597d-jphlt_f3305106-4005-472a-980a-3030ee27d1bb/keystone-api/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.794808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e07b92be-5204-4ddb-97de-24984c997328/kube-state-metrics/0.log" Feb 23 19:30:53 crc kubenswrapper[4768]: I0223 19:30:53.870494 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w_e4de542c-566e-4b7a-a999-04b1219e40a6/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:54 crc kubenswrapper[4768]: I0223 19:30:54.196206 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-546cfc7689-gsp5x_e861983f-c70e-47f3-936d-202ae74a1144/neutron-api/0.log" Feb 23 19:30:54 crc kubenswrapper[4768]: I0223 
19:30:54.286523 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-546cfc7689-gsp5x_e861983f-c70e-47f3-936d-202ae74a1144/neutron-httpd/0.log" Feb 23 19:30:54 crc kubenswrapper[4768]: I0223 19:30:54.450541 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz_8126924c-9f66-4df2-ac7c-eedcd34153b7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:54 crc kubenswrapper[4768]: I0223 19:30:54.915101 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b973b91e-764a-461b-a4ca-50185f1f70af/nova-api-log/0.log" Feb 23 19:30:54 crc kubenswrapper[4768]: I0223 19:30:54.954739 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_22014113-0a8e-4444-b685-5ab40ffc8402/nova-cell0-conductor-conductor/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.125088 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b973b91e-764a-461b-a4ca-50185f1f70af/nova-api-api/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.264773 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_15bf982c-902c-45c7-9620-095ec38e9b86/nova-cell1-conductor-conductor/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.298270 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c6a6d01d-0bb4-43aa-85c6-699d47fd2711/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.437000 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f79cf_4a3528f8-0776-47bf-81fa-c7bd1698938b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.583492 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_08feb509-1dff-446f-bdf1-47c5bc09f772/nova-metadata-log/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.885388 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/mysql-bootstrap/0.log" Feb 23 19:30:55 crc kubenswrapper[4768]: I0223 19:30:55.916043 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7a66d66d-e9d1-4407-9e7e-268f1e7f0feb/nova-scheduler-scheduler/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.066945 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/mysql-bootstrap/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.151349 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/galera/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.301875 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/mysql-bootstrap/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.475818 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/mysql-bootstrap/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.499379 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/galera/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.713145 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7fa93987-e84a-4fa8-97ab-4df24aabb201/openstackclient/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.782051 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_08feb509-1dff-446f-bdf1-47c5bc09f772/nova-metadata-metadata/0.log" Feb 23 19:30:56 crc kubenswrapper[4768]: I0223 19:30:56.946103 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7xj45_6c33d166-1e3e-46c5-a725-472499a5efab/ovn-controller/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.085218 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c45dt_1c73dca1-1a57-4c3a-8337-dba75d7e7b9c/openstack-network-exporter/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.189465 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server-init/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.377421 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server-init/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.397120 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovs-vswitchd/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.465087 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.631416 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bkgsc_76867435-2307-4032-a6ae-203f8009d08d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.646849 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffe1b163-3686-4036-8f27-a4b600234d8a/openstack-network-exporter/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 
19:30:57.719459 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffe1b163-3686-4036-8f27-a4b600234d8a/ovn-northd/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.849345 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b458d35-3ae1-4a39-b1e5-dcfef430f299/openstack-network-exporter/0.log" Feb 23 19:30:57 crc kubenswrapper[4768]: I0223 19:30:57.932781 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b458d35-3ae1-4a39-b1e5-dcfef430f299/ovsdbserver-nb/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.068289 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a43d0d6-32a5-4617-8613-e7fb22a39303/openstack-network-exporter/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.195515 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a43d0d6-32a5-4617-8613-e7fb22a39303/ovsdbserver-sb/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.375502 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-574fcfd8cb-8sv54_77c8192d-2048-476f-af50-d65602ec4d05/placement-log/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.377436 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-574fcfd8cb-8sv54_77c8192d-2048-476f-af50-d65602ec4d05/placement-api/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.463290 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/setup-container/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.664279 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/setup-container/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.723967 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/rabbitmq/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.726606 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/setup-container/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.909437 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/setup-container/0.log" Feb 23 19:30:58 crc kubenswrapper[4768]: I0223 19:30:58.913487 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/rabbitmq/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.049853 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc_68e380e8-220c-4c0e-88e4-a818fb37fe57/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.168452 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9hqm7_34748e05-17f0-4701-936b-a023c3456a93/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.275056 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw_a7d9a362-95f1-4326-99a7-121ec8a4816f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.413778 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ks98v_63675404-f203-4967-9c2b-817ff4d8715c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.576741 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bcnv8_18767704-7745-4fb0-8802-3dc2bf209bbe/ssh-known-hosts-edpm-deployment/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.764592 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66dcf5bf6c-4q2hn_70d5ee44-4e4a-4f31-8104-a72d66f78d72/proxy-server/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.778289 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66dcf5bf6c-4q2hn_70d5ee44-4e4a-4f31-8104-a72d66f78d72/proxy-httpd/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.807490 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9nswb_1ddb02d3-f5a2-4681-90fe-4d5572fed381/swift-ring-rebalance/0.log" Feb 23 19:30:59 crc kubenswrapper[4768]: I0223 19:30:59.990990 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-reaper/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.014039 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-auditor/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.102572 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-replicator/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.305401 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-auditor/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.407880 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-server/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.506034 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-replicator/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.574770 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-server/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.604927 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-auditor/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.641263 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-updater/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.695992 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-expirer/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.827345 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-replicator/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.858328 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-server/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.872220 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-updater/0.log" Feb 23 19:31:00 crc kubenswrapper[4768]: I0223 19:31:00.945596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/rsync/0.log" Feb 23 19:31:01 crc kubenswrapper[4768]: I0223 19:31:01.038031 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/swift-recon-cron/0.log" Feb 23 19:31:01 crc kubenswrapper[4768]: I0223 19:31:01.209134 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x_2393d837-c9f2-4896-ab3e-32924e48359a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:31:01 crc kubenswrapper[4768]: I0223 19:31:01.271170 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_89c93f99-08a8-4231-8b96-d307d0525745/tempest-tests-tempest-tests-runner/0.log" Feb 23 19:31:01 crc kubenswrapper[4768]: I0223 19:31:01.454072 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0f9b3373-10b7-4e2c-8b9f-985eb74fb53d/test-operator-logs-container/0.log" Feb 23 19:31:01 crc kubenswrapper[4768]: I0223 19:31:01.489340 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk_c1470b37-b104-4991-a626-59fcd3936f2c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:31:08 crc kubenswrapper[4768]: I0223 19:31:08.679015 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_065294f2-15e0-4aeb-9002-9602051bf4ff/memcached/0.log" Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.544712 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.544990 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.545035 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.545845 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.545928 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b" gracePeriod=600 Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.758693 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b" exitCode=0 Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.758756 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b"} Feb 23 19:31:09 crc kubenswrapper[4768]: I0223 19:31:09.759011 4768 scope.go:117] "RemoveContainer" containerID="990d4d0f4b15e5502b01ba8cb8a9483f6ab44dd9a099902aa155c9ab08e80491" Feb 23 19:31:10 crc 
kubenswrapper[4768]: I0223 19:31:10.769853 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c"} Feb 23 19:31:26 crc kubenswrapper[4768]: I0223 19:31:26.875608 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.105900 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.145698 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.162676 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.371475 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.404612 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.429160 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/extract/0.log" Feb 23 19:31:27 crc kubenswrapper[4768]: I0223 19:31:27.883293 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-cprlh_aba58523-2fad-45af-87ee-a347b586ad4b/manager/0.log" Feb 23 19:31:28 crc kubenswrapper[4768]: I0223 19:31:28.249541 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-chqsr_be4fc57a-a006-4068-be4b-5bdeb50f48b4/manager/0.log" Feb 23 19:31:28 crc kubenswrapper[4768]: I0223 19:31:28.364609 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-qnwrc_60e38add-201e-4431-90df-d9c31ba57f39/manager/0.log" Feb 23 19:31:28 crc kubenswrapper[4768]: I0223 19:31:28.561361 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-stm2m_0f6c6c75-0fda-41cc-b05f-cfc6e935f82b/manager/0.log" Feb 23 19:31:28 crc kubenswrapper[4768]: I0223 19:31:28.916682 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-mng89_b16ba816-bafa-430e-b18a-5afa27bc0abb/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.057841 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-gn242_02eb4c80-855b-4590-b09e-d6e6b7919f74/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.118806 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wwlql_d52cf386-a646-44c0-8394-cdf497e52ebe/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.366717 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-l5mqh_0522a131-cf71-4a3e-b60a-fa16371d47d8/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.380856 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-xm2kv_0f3afa5e-021e-4226-9734-38d4da145e0a/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.568153 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-mzwrn_8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.766773 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-w5x47_9afd4512-6186-4cb8-a8ba-90628662efba/manager/0.log" Feb 23 19:31:29 crc kubenswrapper[4768]: I0223 19:31:29.887215 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-pmp8k_cbbc4a69-26c2-4d05-b369-aa142f5a04d2/manager/0.log" Feb 23 19:31:30 crc kubenswrapper[4768]: I0223 19:31:30.017993 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-7vrp5_c7086dd9-9e6f-4207-a037-99369dc6e980/manager/0.log" Feb 23 19:31:30 crc kubenswrapper[4768]: I0223 19:31:30.192771 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69_fff6d2ff-130f-45ae-943a-28b8740298c2/manager/0.log" Feb 23 19:31:30 crc kubenswrapper[4768]: I0223 19:31:30.487798 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5dfcfd9b6-jhz5n_80dc5267-2395-41a2-8e61-152b0acbc24c/operator/0.log" Feb 23 19:31:30 crc kubenswrapper[4768]: I0223 
19:31:30.676840 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cwqr7_95798783-c266-4139-a43a-b4fbf879c1b8/registry-server/0.log" Feb 23 19:31:30 crc kubenswrapper[4768]: I0223 19:31:30.952814 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-g9dpw_435b416a-a73b-420a-9f48-99be70b4e110/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.078893 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-t5qm2_ea71893a-6b37-4cc9-b0f5-be711669e8d1/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.153104 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dhmmp_d74d7097-0324-4bb7-83c6-fa8cea69c1b4/operator/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.373286 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-q66cg_13137d15-ffaa-4127-9885-91e9a6fd6a65/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.602636 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-6wfdk_034d1fc6-6b51-4e9a-99f9-67038d4c9926/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.712010 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-nc28p_0b78a9a3-5a2b-435d-8e2f-661eddd91177/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 19:31:31.841150 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-gn98t_86030533-da46-4579-a1ce-67f3d96c7a90/manager/0.log" Feb 23 19:31:31 crc kubenswrapper[4768]: I0223 
19:31:31.950931 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7dfcb74874-dxkzr_92c4522a-291f-4c44-8e08-8e4002685f66/manager/0.log" Feb 23 19:31:33 crc kubenswrapper[4768]: I0223 19:31:33.213809 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cj6bl_97f25c43-f624-4320-b34b-789df5cab5f3/manager/0.log" Feb 23 19:31:50 crc kubenswrapper[4768]: I0223 19:31:50.998965 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6qzzx_30b720f3-fda0-41f1-bca9-e52fe84a3535/control-plane-machine-set-operator/0.log" Feb 23 19:31:51 crc kubenswrapper[4768]: I0223 19:31:51.131748 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn4nn_4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de/kube-rbac-proxy/0.log" Feb 23 19:31:51 crc kubenswrapper[4768]: I0223 19:31:51.189564 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn4nn_4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de/machine-api-operator/0.log" Feb 23 19:32:05 crc kubenswrapper[4768]: I0223 19:32:05.198422 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2pxdp_ce41d193-31cd-4318-b8a6-9f0663e19dd1/cert-manager-controller/0.log" Feb 23 19:32:05 crc kubenswrapper[4768]: I0223 19:32:05.296465 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kbhg9_2434360d-4475-492b-b0d6-d2105f2cf727/cert-manager-cainjector/0.log" Feb 23 19:32:05 crc kubenswrapper[4768]: I0223 19:32:05.369427 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5xqnq_9e4e6814-0ed0-42f2-a94e-27bb939aa62f/cert-manager-webhook/0.log" Feb 23 19:32:18 crc 
kubenswrapper[4768]: I0223 19:32:18.821828 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dstrl_37f3006a-1eda-448a-9a9a-77dd20f51534/nmstate-console-plugin/0.log" Feb 23 19:32:18 crc kubenswrapper[4768]: I0223 19:32:18.991045 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sq2t7_34f1b59b-1b5b-4093-bf9b-97d19e3118e2/nmstate-handler/0.log" Feb 23 19:32:19 crc kubenswrapper[4768]: I0223 19:32:19.002084 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7sgf5_405a4831-883b-4d37-9b41-50b60a1268bf/kube-rbac-proxy/0.log" Feb 23 19:32:19 crc kubenswrapper[4768]: I0223 19:32:19.056600 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7sgf5_405a4831-883b-4d37-9b41-50b60a1268bf/nmstate-metrics/0.log" Feb 23 19:32:19 crc kubenswrapper[4768]: I0223 19:32:19.169808 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-twf42_13c778fb-2aa4-4078-8393-45d0334de750/nmstate-operator/0.log" Feb 23 19:32:19 crc kubenswrapper[4768]: I0223 19:32:19.214680 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-w8cwv_f792bcb6-c414-4f4a-ae75-528cbe81b29d/nmstate-webhook/0.log" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.709168 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:35 crc kubenswrapper[4768]: E0223 19:32:35.710950 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae60615f-1836-4352-a014-033f988df75e" containerName="container-00" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.710976 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae60615f-1836-4352-a014-033f988df75e" containerName="container-00" Feb 23 19:32:35 crc 
kubenswrapper[4768]: I0223 19:32:35.711217 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae60615f-1836-4352-a014-033f988df75e" containerName="container-00" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.712907 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.729586 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.784923 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.784983 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.785015 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7sk5\" (UniqueName: \"kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.887022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.887062 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.887086 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7sk5\" (UniqueName: \"kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.887633 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.887658 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:35 crc kubenswrapper[4768]: I0223 19:32:35.905319 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7sk5\" (UniqueName: 
\"kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5\") pod \"redhat-operators-rswbp\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:36 crc kubenswrapper[4768]: I0223 19:32:36.030483 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:36 crc kubenswrapper[4768]: I0223 19:32:36.587744 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:36 crc kubenswrapper[4768]: I0223 19:32:36.613180 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerStarted","Data":"73d006fbd870297bff7ace9de06fefbaddbedbd079bcdad60345bd10a28dabbe"} Feb 23 19:32:37 crc kubenswrapper[4768]: I0223 19:32:37.625262 4768 generic.go:334] "Generic (PLEG): container finished" podID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerID="c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff" exitCode=0 Feb 23 19:32:37 crc kubenswrapper[4768]: I0223 19:32:37.625357 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerDied","Data":"c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff"} Feb 23 19:32:38 crc kubenswrapper[4768]: I0223 19:32:38.637637 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerStarted","Data":"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9"} Feb 23 19:32:41 crc kubenswrapper[4768]: I0223 19:32:41.671921 4768 generic.go:334] "Generic (PLEG): container finished" podID="4484bb17-b49b-40f3-841c-316f6b6a7555" 
containerID="78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9" exitCode=0 Feb 23 19:32:41 crc kubenswrapper[4768]: I0223 19:32:41.672002 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerDied","Data":"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9"} Feb 23 19:32:42 crc kubenswrapper[4768]: I0223 19:32:42.684636 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerStarted","Data":"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b"} Feb 23 19:32:42 crc kubenswrapper[4768]: I0223 19:32:42.714039 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rswbp" podStartSLOduration=3.225128874 podStartE2EDuration="7.714016613s" podCreationTimestamp="2026-02-23 19:32:35 +0000 UTC" firstStartedPulling="2026-02-23 19:32:37.634692782 +0000 UTC m=+3553.025178572" lastFinishedPulling="2026-02-23 19:32:42.123580501 +0000 UTC m=+3557.514066311" observedRunningTime="2026-02-23 19:32:42.708316379 +0000 UTC m=+3558.098802219" watchObservedRunningTime="2026-02-23 19:32:42.714016613 +0000 UTC m=+3558.104502413" Feb 23 19:32:46 crc kubenswrapper[4768]: I0223 19:32:46.030659 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:46 crc kubenswrapper[4768]: I0223 19:32:46.031062 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:46 crc kubenswrapper[4768]: I0223 19:32:46.978800 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:32:46 crc kubenswrapper[4768]: I0223 19:32:46.981112 4768 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:46 crc kubenswrapper[4768]: I0223 19:32:46.989808 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.020399 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.020594 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttgmw\" (UniqueName: \"kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.020703 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.078966 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rswbp" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="registry-server" probeResult="failure" output=< Feb 23 19:32:47 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s Feb 23 19:32:47 crc kubenswrapper[4768]: > Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.122780 4768 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.122841 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.122934 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttgmw\" (UniqueName: \"kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.123334 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.123364 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.147679 4768 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ttgmw\" (UniqueName: \"kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw\") pod \"redhat-marketplace-brfkz\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.301658 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:47 crc kubenswrapper[4768]: I0223 19:32:47.954389 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:32:47 crc kubenswrapper[4768]: W0223 19:32:47.970495 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode570c6f5_86d8_43b6_a6ad_b6b40f674055.slice/crio-fa66b47fbb83a233de16ef396cea614a77ce7e55183ae10a1f5a5576a2562d0a WatchSource:0}: Error finding container fa66b47fbb83a233de16ef396cea614a77ce7e55183ae10a1f5a5576a2562d0a: Status 404 returned error can't find the container with id fa66b47fbb83a233de16ef396cea614a77ce7e55183ae10a1f5a5576a2562d0a Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.315313 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8snqf_a02480fd-a2d6-4364-b83f-e01dfa5a6676/controller/0.log" Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.405126 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8snqf_a02480fd-a2d6-4364-b83f-e01dfa5a6676/kube-rbac-proxy/0.log" Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.610751 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.742337 4768 generic.go:334] "Generic (PLEG): container finished" 
podID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerID="21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d" exitCode=0 Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.742385 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerDied","Data":"21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d"} Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.742415 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerStarted","Data":"fa66b47fbb83a233de16ef396cea614a77ce7e55183ae10a1f5a5576a2562d0a"} Feb 23 19:32:48 crc kubenswrapper[4768]: I0223 19:32:48.966043 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.008064 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.040005 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.119640 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.296938 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.304385 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.331154 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.353875 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.513391 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.550725 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.555550 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.568053 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/controller/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.702625 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/kube-rbac-proxy/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.732077 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/frr-metrics/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.758753 4768 generic.go:334] "Generic (PLEG): container finished" podID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" 
containerID="ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8" exitCode=0 Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.758793 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerDied","Data":"ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8"} Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.773068 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/kube-rbac-proxy-frr/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.930403 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/reloader/0.log" Feb 23 19:32:49 crc kubenswrapper[4768]: I0223 19:32:49.997103 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-sglwm_06a269f1-e448-49da-b22d-7ef6bcfe31e1/frr-k8s-webhook-server/0.log" Feb 23 19:32:50 crc kubenswrapper[4768]: I0223 19:32:50.216683 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-655544f676-lzj52_e250524b-d6cd-444e-9e6b-3a2a5387d3b2/manager/0.log" Feb 23 19:32:50 crc kubenswrapper[4768]: I0223 19:32:50.408443 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cf8d9bdbb-l2w9j_8c32327a-6231-46a7-9d4b-e0ef86979632/webhook-server/0.log" Feb 23 19:32:50 crc kubenswrapper[4768]: I0223 19:32:50.523594 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-knv9f_bc147539-1205-4a1f-82d6-ca40f47d37d0/kube-rbac-proxy/0.log" Feb 23 19:32:50 crc kubenswrapper[4768]: I0223 19:32:50.769466 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" 
event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerStarted","Data":"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0"} Feb 23 19:32:50 crc kubenswrapper[4768]: I0223 19:32:50.805098 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-brfkz" podStartSLOduration=3.420283684 podStartE2EDuration="4.805077586s" podCreationTimestamp="2026-02-23 19:32:46 +0000 UTC" firstStartedPulling="2026-02-23 19:32:48.744163951 +0000 UTC m=+3564.134649751" lastFinishedPulling="2026-02-23 19:32:50.128957853 +0000 UTC m=+3565.519443653" observedRunningTime="2026-02-23 19:32:50.795885898 +0000 UTC m=+3566.186371698" watchObservedRunningTime="2026-02-23 19:32:50.805077586 +0000 UTC m=+3566.195563386" Feb 23 19:32:51 crc kubenswrapper[4768]: I0223 19:32:51.122232 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-knv9f_bc147539-1205-4a1f-82d6-ca40f47d37d0/speaker/0.log" Feb 23 19:32:51 crc kubenswrapper[4768]: I0223 19:32:51.156830 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/frr/0.log" Feb 23 19:32:56 crc kubenswrapper[4768]: I0223 19:32:56.112285 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:56 crc kubenswrapper[4768]: I0223 19:32:56.164308 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:56 crc kubenswrapper[4768]: I0223 19:32:56.356775 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:57 crc kubenswrapper[4768]: I0223 19:32:57.301949 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:57 crc kubenswrapper[4768]: I0223 19:32:57.302178 4768 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:57 crc kubenswrapper[4768]: I0223 19:32:57.360501 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:57 crc kubenswrapper[4768]: I0223 19:32:57.833651 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rswbp" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="registry-server" containerID="cri-o://44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b" gracePeriod=2 Feb 23 19:32:57 crc kubenswrapper[4768]: I0223 19:32:57.901591 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.383640 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.464016 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities\") pod \"4484bb17-b49b-40f3-841c-316f6b6a7555\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.464112 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7sk5\" (UniqueName: \"kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5\") pod \"4484bb17-b49b-40f3-841c-316f6b6a7555\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.464228 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content\") pod \"4484bb17-b49b-40f3-841c-316f6b6a7555\" (UID: \"4484bb17-b49b-40f3-841c-316f6b6a7555\") " Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.464954 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities" (OuterVolumeSpecName: "utilities") pod "4484bb17-b49b-40f3-841c-316f6b6a7555" (UID: "4484bb17-b49b-40f3-841c-316f6b6a7555"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.468815 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5" (OuterVolumeSpecName: "kube-api-access-f7sk5") pod "4484bb17-b49b-40f3-841c-316f6b6a7555" (UID: "4484bb17-b49b-40f3-841c-316f6b6a7555"). InnerVolumeSpecName "kube-api-access-f7sk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.566377 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.566412 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7sk5\" (UniqueName: \"kubernetes.io/projected/4484bb17-b49b-40f3-841c-316f6b6a7555-kube-api-access-f7sk5\") on node \"crc\" DevicePath \"\"" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.583119 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4484bb17-b49b-40f3-841c-316f6b6a7555" (UID: "4484bb17-b49b-40f3-841c-316f6b6a7555"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.667906 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4484bb17-b49b-40f3-841c-316f6b6a7555-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.845604 4768 generic.go:334] "Generic (PLEG): container finished" podID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerID="44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b" exitCode=0 Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.845678 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rswbp" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.845729 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerDied","Data":"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b"} Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.845793 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rswbp" event={"ID":"4484bb17-b49b-40f3-841c-316f6b6a7555","Type":"ContainerDied","Data":"73d006fbd870297bff7ace9de06fefbaddbedbd079bcdad60345bd10a28dabbe"} Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.845817 4768 scope.go:117] "RemoveContainer" containerID="44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.867639 4768 scope.go:117] "RemoveContainer" containerID="78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.882868 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 
19:32:58.891065 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rswbp"] Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.901419 4768 scope.go:117] "RemoveContainer" containerID="c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.953081 4768 scope.go:117] "RemoveContainer" containerID="44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.954969 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:32:58 crc kubenswrapper[4768]: E0223 19:32:58.955340 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b\": container with ID starting with 44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b not found: ID does not exist" containerID="44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.955388 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b"} err="failed to get container status \"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b\": rpc error: code = NotFound desc = could not find container \"44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b\": container with ID starting with 44dccb1a43982f43d6008ef106fe14941df6cebda22efcde633b418f9dc7ea1b not found: ID does not exist" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.955415 4768 scope.go:117] "RemoveContainer" containerID="78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9" Feb 23 19:32:58 crc kubenswrapper[4768]: E0223 19:32:58.955804 4768 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9\": container with ID starting with 78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9 not found: ID does not exist" containerID="78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.955827 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9"} err="failed to get container status \"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9\": rpc error: code = NotFound desc = could not find container \"78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9\": container with ID starting with 78aaaea6c5127b85622671956220c267950c1b485f2993865c84f5e3adde47f9 not found: ID does not exist" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.955841 4768 scope.go:117] "RemoveContainer" containerID="c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff" Feb 23 19:32:58 crc kubenswrapper[4768]: E0223 19:32:58.956075 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff\": container with ID starting with c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff not found: ID does not exist" containerID="c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff" Feb 23 19:32:58 crc kubenswrapper[4768]: I0223 19:32:58.956095 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff"} err="failed to get container status \"c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff\": rpc error: code = NotFound desc = could not find container 
\"c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff\": container with ID starting with c2854eb5059d239dd02e0df811f11c575a1d0538759955e99b0a4ef8540bb1ff not found: ID does not exist" Feb 23 19:32:59 crc kubenswrapper[4768]: I0223 19:32:59.321759 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" path="/var/lib/kubelet/pods/4484bb17-b49b-40f3-841c-316f6b6a7555/volumes" Feb 23 19:33:00 crc kubenswrapper[4768]: I0223 19:33:00.864296 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-brfkz" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="registry-server" containerID="cri-o://490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0" gracePeriod=2 Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.398936 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.538796 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttgmw\" (UniqueName: \"kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw\") pod \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.539146 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities\") pod \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.539196 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content\") pod 
\"e570c6f5-86d8-43b6-a6ad-b6b40f674055\" (UID: \"e570c6f5-86d8-43b6-a6ad-b6b40f674055\") " Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.542846 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities" (OuterVolumeSpecName: "utilities") pod "e570c6f5-86d8-43b6-a6ad-b6b40f674055" (UID: "e570c6f5-86d8-43b6-a6ad-b6b40f674055"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.551085 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw" (OuterVolumeSpecName: "kube-api-access-ttgmw") pod "e570c6f5-86d8-43b6-a6ad-b6b40f674055" (UID: "e570c6f5-86d8-43b6-a6ad-b6b40f674055"). InnerVolumeSpecName "kube-api-access-ttgmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.572553 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e570c6f5-86d8-43b6-a6ad-b6b40f674055" (UID: "e570c6f5-86d8-43b6-a6ad-b6b40f674055"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.641176 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.641215 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e570c6f5-86d8-43b6-a6ad-b6b40f674055-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.641237 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttgmw\" (UniqueName: \"kubernetes.io/projected/e570c6f5-86d8-43b6-a6ad-b6b40f674055-kube-api-access-ttgmw\") on node \"crc\" DevicePath \"\"" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.875864 4768 generic.go:334] "Generic (PLEG): container finished" podID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerID="490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0" exitCode=0 Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.875909 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerDied","Data":"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0"} Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.875935 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-brfkz" event={"ID":"e570c6f5-86d8-43b6-a6ad-b6b40f674055","Type":"ContainerDied","Data":"fa66b47fbb83a233de16ef396cea614a77ce7e55183ae10a1f5a5576a2562d0a"} Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.875935 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-brfkz" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.875981 4768 scope.go:117] "RemoveContainer" containerID="490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.916632 4768 scope.go:117] "RemoveContainer" containerID="ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8" Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.918336 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.939582 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-brfkz"] Feb 23 19:33:01 crc kubenswrapper[4768]: I0223 19:33:01.952362 4768 scope.go:117] "RemoveContainer" containerID="21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.004687 4768 scope.go:117] "RemoveContainer" containerID="490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0" Feb 23 19:33:02 crc kubenswrapper[4768]: E0223 19:33:02.005431 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0\": container with ID starting with 490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0 not found: ID does not exist" containerID="490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.005488 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0"} err="failed to get container status \"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0\": rpc error: code = NotFound desc = could not find container 
\"490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0\": container with ID starting with 490bf6a0146bf8c5a211fe7ebe3bd667cfcb97baea0c8bece98a802382e622e0 not found: ID does not exist" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.005520 4768 scope.go:117] "RemoveContainer" containerID="ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8" Feb 23 19:33:02 crc kubenswrapper[4768]: E0223 19:33:02.005907 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8\": container with ID starting with ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8 not found: ID does not exist" containerID="ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.005957 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8"} err="failed to get container status \"ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8\": rpc error: code = NotFound desc = could not find container \"ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8\": container with ID starting with ef5e164c04655529cff3eb80f0e3d0be2d23a7e8194583d1052ff235353349a8 not found: ID does not exist" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.005987 4768 scope.go:117] "RemoveContainer" containerID="21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d" Feb 23 19:33:02 crc kubenswrapper[4768]: E0223 19:33:02.006378 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d\": container with ID starting with 21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d not found: ID does not exist" 
containerID="21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d" Feb 23 19:33:02 crc kubenswrapper[4768]: I0223 19:33:02.006410 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d"} err="failed to get container status \"21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d\": rpc error: code = NotFound desc = could not find container \"21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d\": container with ID starting with 21a2a5cd50ff49d7e218368e8b2e8a51480dc3dea4948f9d42d7878511e4145d not found: ID does not exist" Feb 23 19:33:03 crc kubenswrapper[4768]: I0223 19:33:03.323696 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" path="/var/lib/kubelet/pods/e570c6f5-86d8-43b6-a6ad-b6b40f674055/volumes" Feb 23 19:33:05 crc kubenswrapper[4768]: I0223 19:33:05.622420 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:33:05 crc kubenswrapper[4768]: I0223 19:33:05.795868 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:33:05 crc kubenswrapper[4768]: I0223 19:33:05.851298 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:33:05 crc kubenswrapper[4768]: I0223 19:33:05.859431 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:33:06 crc 
kubenswrapper[4768]: I0223 19:33:06.035589 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.065169 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.109633 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/extract/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.249900 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.439827 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.450106 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.508309 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.634199 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 
19:33:06.653737 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:33:06 crc kubenswrapper[4768]: I0223 19:33:06.874934 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.016331 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/registry-server/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.087082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.089055 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.156149 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.341826 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.390490 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.575610 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.737009 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/registry-server/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.934279 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.934425 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:33:07 crc kubenswrapper[4768]: I0223 19:33:07.941358 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.084309 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.093071 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.126867 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/extract/0.log" Feb 23 19:33:08 crc 
kubenswrapper[4768]: I0223 19:33:08.262198 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n6tjv_c25ac972-0ed9-475d-b506-222f90fe52f9/marketplace-operator/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.345816 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.496444 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.559643 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.566179 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.710211 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.729737 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.832414 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/registry-server/0.log" Feb 23 19:33:08 crc kubenswrapper[4768]: I0223 19:33:08.935115 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.077295 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.092413 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.127399 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.368187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.399448 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.544577 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.544643 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 23 19:33:09 crc kubenswrapper[4768]: I0223 19:33:09.803560 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/registry-server/0.log" Feb 23 19:33:33 crc kubenswrapper[4768]: E0223 19:33:33.576398 4768 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.115:51824->38.102.83.115:37299: write tcp 38.102.83.115:51824->38.102.83.115:37299: write: broken pipe Feb 23 19:33:39 crc kubenswrapper[4768]: I0223 19:33:39.544855 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:33:39 crc kubenswrapper[4768]: I0223 19:33:39.545225 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:34:09 crc kubenswrapper[4768]: I0223 19:34:09.545771 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:34:09 crc kubenswrapper[4768]: I0223 19:34:09.546829 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 23 19:34:09 crc kubenswrapper[4768]: I0223 19:34:09.546971 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:34:09 crc kubenswrapper[4768]: I0223 19:34:09.548281 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:34:09 crc kubenswrapper[4768]: I0223 19:34:09.548372 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" gracePeriod=600 Feb 23 19:34:09 crc kubenswrapper[4768]: E0223 19:34:09.681135 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:34:10 crc kubenswrapper[4768]: I0223 19:34:10.612056 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" exitCode=0 Feb 23 19:34:10 crc kubenswrapper[4768]: I0223 19:34:10.612134 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c"} Feb 23 19:34:10 crc kubenswrapper[4768]: I0223 19:34:10.612513 4768 scope.go:117] "RemoveContainer" containerID="b646a8c09da3b1a57c2765094d5b5177101d4658c9a8134db1d254dc4300ce3b" Feb 23 19:34:10 crc kubenswrapper[4768]: I0223 19:34:10.614653 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:34:10 crc kubenswrapper[4768]: E0223 19:34:10.615294 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:34:25 crc kubenswrapper[4768]: I0223 19:34:25.321907 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:34:25 crc kubenswrapper[4768]: E0223 19:34:25.324295 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:34:36 crc kubenswrapper[4768]: I0223 19:34:36.308515 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:34:36 crc kubenswrapper[4768]: E0223 19:34:36.309355 4768 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:34:51 crc kubenswrapper[4768]: I0223 19:34:51.307884 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:34:51 crc kubenswrapper[4768]: E0223 19:34:51.308611 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:34:53 crc kubenswrapper[4768]: I0223 19:34:53.083957 4768 generic.go:334] "Generic (PLEG): container finished" podID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerID="ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42" exitCode=0 Feb 23 19:34:53 crc kubenswrapper[4768]: I0223 19:34:53.084084 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-nfkwh/must-gather-275s8" event={"ID":"af07cf12-afdf-443f-8ee6-b20f9eb92269","Type":"ContainerDied","Data":"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42"} Feb 23 19:34:53 crc kubenswrapper[4768]: I0223 19:34:53.085301 4768 scope.go:117] "RemoveContainer" containerID="ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42" Feb 23 19:34:53 crc kubenswrapper[4768]: I0223 19:34:53.383900 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-nfkwh_must-gather-275s8_af07cf12-afdf-443f-8ee6-b20f9eb92269/gather/0.log" Feb 23 19:35:01 crc kubenswrapper[4768]: I0223 19:35:01.461749 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-nfkwh/must-gather-275s8"] Feb 23 19:35:01 crc kubenswrapper[4768]: I0223 19:35:01.462651 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-nfkwh/must-gather-275s8" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="copy" containerID="cri-o://76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822" gracePeriod=2 Feb 23 19:35:01 crc kubenswrapper[4768]: I0223 19:35:01.471293 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-nfkwh/must-gather-275s8"] Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:01.891596 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nfkwh_must-gather-275s8_af07cf12-afdf-443f-8ee6-b20f9eb92269/copy/0.log" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:01.892162 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-nfkwh/must-gather-275s8" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.048671 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output\") pod \"af07cf12-afdf-443f-8ee6-b20f9eb92269\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.048937 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrx8m\" (UniqueName: \"kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m\") pod \"af07cf12-afdf-443f-8ee6-b20f9eb92269\" (UID: \"af07cf12-afdf-443f-8ee6-b20f9eb92269\") " Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.054451 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m" (OuterVolumeSpecName: "kube-api-access-mrx8m") pod "af07cf12-afdf-443f-8ee6-b20f9eb92269" (UID: "af07cf12-afdf-443f-8ee6-b20f9eb92269"). InnerVolumeSpecName "kube-api-access-mrx8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.151767 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrx8m\" (UniqueName: \"kubernetes.io/projected/af07cf12-afdf-443f-8ee6-b20f9eb92269-kube-api-access-mrx8m\") on node \"crc\" DevicePath \"\"" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.193640 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-nfkwh_must-gather-275s8_af07cf12-afdf-443f-8ee6-b20f9eb92269/copy/0.log" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.194095 4768 generic.go:334] "Generic (PLEG): container finished" podID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerID="76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822" exitCode=143 Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.194155 4768 scope.go:117] "RemoveContainer" containerID="76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.194360 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-nfkwh/must-gather-275s8" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.207495 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "af07cf12-afdf-443f-8ee6-b20f9eb92269" (UID: "af07cf12-afdf-443f-8ee6-b20f9eb92269"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.218215 4768 scope.go:117] "RemoveContainer" containerID="ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.250980 4768 scope.go:117] "RemoveContainer" containerID="76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.253325 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/af07cf12-afdf-443f-8ee6-b20f9eb92269-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 23 19:35:02 crc kubenswrapper[4768]: E0223 19:35:02.255674 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822\": container with ID starting with 76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822 not found: ID does not exist" containerID="76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.255699 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822"} err="failed to get container status \"76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822\": rpc error: code = NotFound desc = could not find container \"76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822\": container with ID starting with 76877552bbb496753c90bc7da0d58c939a598c1edd9004d49b17380069c5f822 not found: ID does not exist" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.255720 4768 scope.go:117] "RemoveContainer" containerID="ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42" Feb 23 19:35:02 crc kubenswrapper[4768]: E0223 19:35:02.256082 4768 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42\": container with ID starting with ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42 not found: ID does not exist" containerID="ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42" Feb 23 19:35:02 crc kubenswrapper[4768]: I0223 19:35:02.256130 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42"} err="failed to get container status \"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42\": rpc error: code = NotFound desc = could not find container \"ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42\": container with ID starting with ab9e7de2e951b2d1d1b306d79de04aba4719882a57bc28f924fc8c35ec254a42 not found: ID does not exist" Feb 23 19:35:03 crc kubenswrapper[4768]: I0223 19:35:03.323711 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" path="/var/lib/kubelet/pods/af07cf12-afdf-443f-8ee6-b20f9eb92269/volumes" Feb 23 19:35:05 crc kubenswrapper[4768]: I0223 19:35:05.323674 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:35:05 crc kubenswrapper[4768]: E0223 19:35:05.324588 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.307771 4768 scope.go:117] 
"RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.308473 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.554800 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555150 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="copy" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555167 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="copy" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555177 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="extract-content" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555184 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="extract-content" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555198 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="extract-utilities" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555204 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="extract-utilities" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555214 
4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="extract-utilities" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555219 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="extract-utilities" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555232 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555238 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555263 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555269 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555280 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="extract-content" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555285 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="extract-content" Feb 23 19:35:18 crc kubenswrapper[4768]: E0223 19:35:18.555296 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="gather" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555302 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="gather" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555494 4768 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4484bb17-b49b-40f3-841c-316f6b6a7555" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555506 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="e570c6f5-86d8-43b6-a6ad-b6b40f674055" containerName="registry-server" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555522 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="gather" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.555533 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="af07cf12-afdf-443f-8ee6-b20f9eb92269" containerName="copy" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.556734 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.590552 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.732473 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.732526 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.732618 4768 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2dp9\" (UniqueName: \"kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.833843 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.833889 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.833933 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2dp9\" (UniqueName: \"kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.834492 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.834873 4768 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.855915 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2dp9\" (UniqueName: \"kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9\") pod \"community-operators-xtnfz\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:18 crc kubenswrapper[4768]: I0223 19:35:18.892498 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:19 crc kubenswrapper[4768]: I0223 19:35:19.490736 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:19 crc kubenswrapper[4768]: I0223 19:35:19.525010 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerStarted","Data":"c8b5f25582ea10726893d010993cb170971106f9868a10fa198ed4091cdcbcd5"} Feb 23 19:35:20 crc kubenswrapper[4768]: I0223 19:35:20.537041 4768 generic.go:334] "Generic (PLEG): container finished" podID="166b4e1d-327d-45d8-b244-422661a0bca0" containerID="864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23" exitCode=0 Feb 23 19:35:20 crc kubenswrapper[4768]: I0223 19:35:20.537113 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerDied","Data":"864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23"} Feb 23 19:35:20 crc 
kubenswrapper[4768]: I0223 19:35:20.540092 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 23 19:35:21 crc kubenswrapper[4768]: I0223 19:35:21.550845 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerStarted","Data":"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960"} Feb 23 19:35:22 crc kubenswrapper[4768]: I0223 19:35:22.565524 4768 generic.go:334] "Generic (PLEG): container finished" podID="166b4e1d-327d-45d8-b244-422661a0bca0" containerID="7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960" exitCode=0 Feb 23 19:35:22 crc kubenswrapper[4768]: I0223 19:35:22.565601 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerDied","Data":"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960"} Feb 23 19:35:23 crc kubenswrapper[4768]: I0223 19:35:23.576183 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerStarted","Data":"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345"} Feb 23 19:35:23 crc kubenswrapper[4768]: I0223 19:35:23.606153 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xtnfz" podStartSLOduration=3.189571677 podStartE2EDuration="5.606131301s" podCreationTimestamp="2026-02-23 19:35:18 +0000 UTC" firstStartedPulling="2026-02-23 19:35:20.539811077 +0000 UTC m=+3715.930296877" lastFinishedPulling="2026-02-23 19:35:22.956370701 +0000 UTC m=+3718.346856501" observedRunningTime="2026-02-23 19:35:23.605230366 +0000 UTC m=+3718.995716166" watchObservedRunningTime="2026-02-23 19:35:23.606131301 +0000 UTC 
m=+3718.996617111" Feb 23 19:35:28 crc kubenswrapper[4768]: I0223 19:35:28.893407 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:28 crc kubenswrapper[4768]: I0223 19:35:28.898286 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:28 crc kubenswrapper[4768]: I0223 19:35:28.977619 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:29 crc kubenswrapper[4768]: I0223 19:35:29.689065 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:29 crc kubenswrapper[4768]: I0223 19:35:29.739853 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:30 crc kubenswrapper[4768]: I0223 19:35:30.308572 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:35:30 crc kubenswrapper[4768]: E0223 19:35:30.309115 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:35:31 crc kubenswrapper[4768]: I0223 19:35:31.656736 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xtnfz" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="registry-server" containerID="cri-o://c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345" gracePeriod=2 Feb 
23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.178141 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.327190 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2dp9\" (UniqueName: \"kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9\") pod \"166b4e1d-327d-45d8-b244-422661a0bca0\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.327319 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities\") pod \"166b4e1d-327d-45d8-b244-422661a0bca0\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.327548 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content\") pod \"166b4e1d-327d-45d8-b244-422661a0bca0\" (UID: \"166b4e1d-327d-45d8-b244-422661a0bca0\") " Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.328702 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities" (OuterVolumeSpecName: "utilities") pod "166b4e1d-327d-45d8-b244-422661a0bca0" (UID: "166b4e1d-327d-45d8-b244-422661a0bca0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.338906 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9" (OuterVolumeSpecName: "kube-api-access-g2dp9") pod "166b4e1d-327d-45d8-b244-422661a0bca0" (UID: "166b4e1d-327d-45d8-b244-422661a0bca0"). InnerVolumeSpecName "kube-api-access-g2dp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.382087 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "166b4e1d-327d-45d8-b244-422661a0bca0" (UID: "166b4e1d-327d-45d8-b244-422661a0bca0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.430238 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2dp9\" (UniqueName: \"kubernetes.io/projected/166b4e1d-327d-45d8-b244-422661a0bca0-kube-api-access-g2dp9\") on node \"crc\" DevicePath \"\"" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.430282 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.430292 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/166b4e1d-327d-45d8-b244-422661a0bca0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.675599 4768 generic.go:334] "Generic (PLEG): container finished" podID="166b4e1d-327d-45d8-b244-422661a0bca0" 
containerID="c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345" exitCode=0 Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.675646 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xtnfz" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.675661 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerDied","Data":"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345"} Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.675704 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xtnfz" event={"ID":"166b4e1d-327d-45d8-b244-422661a0bca0","Type":"ContainerDied","Data":"c8b5f25582ea10726893d010993cb170971106f9868a10fa198ed4091cdcbcd5"} Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.675739 4768 scope.go:117] "RemoveContainer" containerID="c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.716557 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.718922 4768 scope.go:117] "RemoveContainer" containerID="7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.732983 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xtnfz"] Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.750628 4768 scope.go:117] "RemoveContainer" containerID="864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.792887 4768 scope.go:117] "RemoveContainer" containerID="c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345" Feb 23 
19:35:32 crc kubenswrapper[4768]: E0223 19:35:32.793533 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345\": container with ID starting with c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345 not found: ID does not exist" containerID="c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.793590 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345"} err="failed to get container status \"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345\": rpc error: code = NotFound desc = could not find container \"c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345\": container with ID starting with c0e0d3a8dfa06ab4c84ce244f3d7eb077a614f625a778c479cf835e1300c1345 not found: ID does not exist" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.793629 4768 scope.go:117] "RemoveContainer" containerID="7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960" Feb 23 19:35:32 crc kubenswrapper[4768]: E0223 19:35:32.794417 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960\": container with ID starting with 7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960 not found: ID does not exist" containerID="7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.794472 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960"} err="failed to get container status 
\"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960\": rpc error: code = NotFound desc = could not find container \"7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960\": container with ID starting with 7b0790db94ebbfdd29b3052fa23ea8eb75fc2a47d3f3cfb91c03d9679d069960 not found: ID does not exist" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.794524 4768 scope.go:117] "RemoveContainer" containerID="864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23" Feb 23 19:35:32 crc kubenswrapper[4768]: E0223 19:35:32.794986 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23\": container with ID starting with 864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23 not found: ID does not exist" containerID="864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23" Feb 23 19:35:32 crc kubenswrapper[4768]: I0223 19:35:32.795079 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23"} err="failed to get container status \"864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23\": rpc error: code = NotFound desc = could not find container \"864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23\": container with ID starting with 864620b71dc52766745cbf24f8f176de409aeb82e429b7b28c6a5de4a8522b23 not found: ID does not exist" Feb 23 19:35:33 crc kubenswrapper[4768]: I0223 19:35:33.325841 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" path="/var/lib/kubelet/pods/166b4e1d-327d-45d8-b244-422661a0bca0/volumes" Feb 23 19:35:42 crc kubenswrapper[4768]: I0223 19:35:42.307773 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 
19:35:42 crc kubenswrapper[4768]: E0223 19:35:42.309040 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:35:57 crc kubenswrapper[4768]: I0223 19:35:57.307532 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:35:57 crc kubenswrapper[4768]: E0223 19:35:57.308337 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:36:09 crc kubenswrapper[4768]: I0223 19:36:09.308355 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:36:09 crc kubenswrapper[4768]: E0223 19:36:09.312185 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:36:22 crc kubenswrapper[4768]: I0223 19:36:22.308340 4768 scope.go:117] "RemoveContainer" 
containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:36:22 crc kubenswrapper[4768]: E0223 19:36:22.309193 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:36:33 crc kubenswrapper[4768]: I0223 19:36:33.312377 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:36:33 crc kubenswrapper[4768]: E0223 19:36:33.313466 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.818847 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:34 crc kubenswrapper[4768]: E0223 19:36:34.819699 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="extract-content" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.819719 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="extract-content" Feb 23 19:36:34 crc kubenswrapper[4768]: E0223 19:36:34.819765 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" 
containerName="extract-utilities" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.819778 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="extract-utilities" Feb 23 19:36:34 crc kubenswrapper[4768]: E0223 19:36:34.819802 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="registry-server" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.819815 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="registry-server" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.820111 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="166b4e1d-327d-45d8-b244-422661a0bca0" containerName="registry-server" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.822374 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.843711 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.927824 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.927969 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " 
pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:34 crc kubenswrapper[4768]: I0223 19:36:34.928023 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfxq4\" (UniqueName: \"kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.030039 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.030142 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.030177 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfxq4\" (UniqueName: \"kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.030863 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " 
pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.030892 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.055378 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfxq4\" (UniqueName: \"kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4\") pod \"certified-operators-r7dkv\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.158200 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:35 crc kubenswrapper[4768]: I0223 19:36:35.649367 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:36 crc kubenswrapper[4768]: I0223 19:36:36.384014 4768 generic.go:334] "Generic (PLEG): container finished" podID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerID="cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc" exitCode=0 Feb 23 19:36:36 crc kubenswrapper[4768]: I0223 19:36:36.384069 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerDied","Data":"cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc"} Feb 23 19:36:36 crc kubenswrapper[4768]: I0223 19:36:36.384426 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" 
event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerStarted","Data":"fdd835b7d7bb103514ae58d7a2118d604beb90adf846b3e87036f462437145c6"} Feb 23 19:36:38 crc kubenswrapper[4768]: I0223 19:36:38.405838 4768 generic.go:334] "Generic (PLEG): container finished" podID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerID="ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19" exitCode=0 Feb 23 19:36:38 crc kubenswrapper[4768]: I0223 19:36:38.405907 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerDied","Data":"ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19"} Feb 23 19:36:39 crc kubenswrapper[4768]: I0223 19:36:39.431957 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerStarted","Data":"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348"} Feb 23 19:36:39 crc kubenswrapper[4768]: I0223 19:36:39.459809 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r7dkv" podStartSLOduration=2.996504838 podStartE2EDuration="5.459775148s" podCreationTimestamp="2026-02-23 19:36:34 +0000 UTC" firstStartedPulling="2026-02-23 19:36:36.38553982 +0000 UTC m=+3791.776025640" lastFinishedPulling="2026-02-23 19:36:38.84881012 +0000 UTC m=+3794.239295950" observedRunningTime="2026-02-23 19:36:39.458735711 +0000 UTC m=+3794.849221561" watchObservedRunningTime="2026-02-23 19:36:39.459775148 +0000 UTC m=+3794.850261038" Feb 23 19:36:42 crc kubenswrapper[4768]: I0223 19:36:42.074409 4768 scope.go:117] "RemoveContainer" containerID="600f72ccae170763cb4e11674e0fa9d7c150ee3b4326a170137786aefd30586b" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.159151 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.159796 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.243737 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.320571 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:36:45 crc kubenswrapper[4768]: E0223 19:36:45.321141 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.589096 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:45 crc kubenswrapper[4768]: I0223 19:36:45.673569 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:47 crc kubenswrapper[4768]: I0223 19:36:47.523714 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r7dkv" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="registry-server" containerID="cri-o://a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348" gracePeriod=2 Feb 23 19:36:47 crc kubenswrapper[4768]: I0223 19:36:47.966559 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.132353 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content\") pod \"09c5f0bd-0580-4d46-a831-6b4d314583b3\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.132406 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities\") pod \"09c5f0bd-0580-4d46-a831-6b4d314583b3\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.132637 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfxq4\" (UniqueName: \"kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4\") pod \"09c5f0bd-0580-4d46-a831-6b4d314583b3\" (UID: \"09c5f0bd-0580-4d46-a831-6b4d314583b3\") " Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.133857 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities" (OuterVolumeSpecName: "utilities") pod "09c5f0bd-0580-4d46-a831-6b4d314583b3" (UID: "09c5f0bd-0580-4d46-a831-6b4d314583b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.145462 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4" (OuterVolumeSpecName: "kube-api-access-sfxq4") pod "09c5f0bd-0580-4d46-a831-6b4d314583b3" (UID: "09c5f0bd-0580-4d46-a831-6b4d314583b3"). InnerVolumeSpecName "kube-api-access-sfxq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.234598 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.234629 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfxq4\" (UniqueName: \"kubernetes.io/projected/09c5f0bd-0580-4d46-a831-6b4d314583b3-kube-api-access-sfxq4\") on node \"crc\" DevicePath \"\"" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.489741 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "09c5f0bd-0580-4d46-a831-6b4d314583b3" (UID: "09c5f0bd-0580-4d46-a831-6b4d314583b3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.532498 4768 generic.go:334] "Generic (PLEG): container finished" podID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerID="a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348" exitCode=0 Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.532548 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerDied","Data":"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348"} Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.532562 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r7dkv" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.532586 4768 scope.go:117] "RemoveContainer" containerID="a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.532574 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r7dkv" event={"ID":"09c5f0bd-0580-4d46-a831-6b4d314583b3","Type":"ContainerDied","Data":"fdd835b7d7bb103514ae58d7a2118d604beb90adf846b3e87036f462437145c6"} Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.540414 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09c5f0bd-0580-4d46-a831-6b4d314583b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.552944 4768 scope.go:117] "RemoveContainer" containerID="ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.577721 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.591052 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r7dkv"] Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.594960 4768 scope.go:117] "RemoveContainer" containerID="cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.650348 4768 scope.go:117] "RemoveContainer" containerID="a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348" Feb 23 19:36:48 crc kubenswrapper[4768]: E0223 19:36:48.650909 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348\": 
container with ID starting with a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348 not found: ID does not exist" containerID="a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.650994 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348"} err="failed to get container status \"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348\": rpc error: code = NotFound desc = could not find container \"a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348\": container with ID starting with a4d7565518f9c3f9358ff62c334b305d9a062ff2b238d7a32ec8508f2bee4348 not found: ID does not exist" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.651103 4768 scope.go:117] "RemoveContainer" containerID="ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19" Feb 23 19:36:48 crc kubenswrapper[4768]: E0223 19:36:48.652423 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19\": container with ID starting with ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19 not found: ID does not exist" containerID="ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.652455 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19"} err="failed to get container status \"ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19\": rpc error: code = NotFound desc = could not find container \"ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19\": container with ID starting with 
ba83fd5ee97536da85f5ed5a27532d5a7b38fc32fefb150bc8f4963079e83b19 not found: ID does not exist" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.652473 4768 scope.go:117] "RemoveContainer" containerID="cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc" Feb 23 19:36:48 crc kubenswrapper[4768]: E0223 19:36:48.652771 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc\": container with ID starting with cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc not found: ID does not exist" containerID="cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc" Feb 23 19:36:48 crc kubenswrapper[4768]: I0223 19:36:48.652805 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc"} err="failed to get container status \"cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc\": rpc error: code = NotFound desc = could not find container \"cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc\": container with ID starting with cbba6c6075661723044ae39dd15d4c33abb2438dc5bbb8453a8bbb1788fc04cc not found: ID does not exist" Feb 23 19:36:49 crc kubenswrapper[4768]: I0223 19:36:49.322006 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" path="/var/lib/kubelet/pods/09c5f0bd-0580-4d46-a831-6b4d314583b3/volumes" Feb 23 19:36:58 crc kubenswrapper[4768]: I0223 19:36:58.307648 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:36:58 crc kubenswrapper[4768]: E0223 19:36:58.308570 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:37:11 crc kubenswrapper[4768]: I0223 19:37:11.308820 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:37:11 crc kubenswrapper[4768]: E0223 19:37:11.310050 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:37:24 crc kubenswrapper[4768]: I0223 19:37:24.308609 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:37:24 crc kubenswrapper[4768]: E0223 19:37:24.309737 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:37:37 crc kubenswrapper[4768]: I0223 19:37:37.308057 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:37:37 crc kubenswrapper[4768]: E0223 19:37:37.309501 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:37:50 crc kubenswrapper[4768]: I0223 19:37:50.307981 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:37:50 crc kubenswrapper[4768]: E0223 19:37:50.310140 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:38:02 crc kubenswrapper[4768]: I0223 19:38:02.308664 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:38:02 crc kubenswrapper[4768]: E0223 19:38:02.310505 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.904085 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbhb5/must-gather-bbthh"] Feb 23 19:38:07 crc kubenswrapper[4768]: E0223 19:38:07.915436 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="extract-content" Feb 23 
19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.915468 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="extract-content" Feb 23 19:38:07 crc kubenswrapper[4768]: E0223 19:38:07.915482 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="registry-server" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.915488 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="registry-server" Feb 23 19:38:07 crc kubenswrapper[4768]: E0223 19:38:07.915510 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="extract-utilities" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.915516 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="extract-utilities" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.915744 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c5f0bd-0580-4d46-a831-6b4d314583b3" containerName="registry-server" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.916602 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sbhb5/must-gather-bbthh"] Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.916684 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.918813 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sbhb5"/"default-dockercfg-r2zdz" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.921473 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sbhb5"/"kube-root-ca.crt" Feb 23 19:38:07 crc kubenswrapper[4768]: I0223 19:38:07.921495 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sbhb5"/"openshift-service-ca.crt" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.039852 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmxh4\" (UniqueName: \"kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.040015 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.142164 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmxh4\" (UniqueName: \"kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.142300 4768 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.142779 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.169385 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmxh4\" (UniqueName: \"kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4\") pod \"must-gather-bbthh\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") " pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.235838 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/must-gather-bbthh" Feb 23 19:38:08 crc kubenswrapper[4768]: I0223 19:38:08.764891 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sbhb5/must-gather-bbthh"] Feb 23 19:38:09 crc kubenswrapper[4768]: I0223 19:38:09.510362 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/must-gather-bbthh" event={"ID":"7ef14749-73b8-4e85-b19b-81633da7d903","Type":"ContainerStarted","Data":"994b96a3cce5c8e63ffdd79dc036e41dca233268e6245da455da174f28487719"} Feb 23 19:38:09 crc kubenswrapper[4768]: I0223 19:38:09.510760 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/must-gather-bbthh" event={"ID":"7ef14749-73b8-4e85-b19b-81633da7d903","Type":"ContainerStarted","Data":"f7346f3dc56c92437e38e7563c709a992334cf9dfd7eb4e39063fb258867caba"} Feb 23 19:38:09 crc kubenswrapper[4768]: I0223 19:38:09.510772 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/must-gather-bbthh" event={"ID":"7ef14749-73b8-4e85-b19b-81633da7d903","Type":"ContainerStarted","Data":"68227e467e0cad7be060cbf8a2fa5129f6476b75a6b74faff69b90e2f060fa81"} Feb 23 19:38:09 crc kubenswrapper[4768]: I0223 19:38:09.544820 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sbhb5/must-gather-bbthh" podStartSLOduration=2.544805926 podStartE2EDuration="2.544805926s" podCreationTimestamp="2026-02-23 19:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:38:09.543534182 +0000 UTC m=+3884.934019982" watchObservedRunningTime="2026-02-23 19:38:09.544805926 +0000 UTC m=+3884.935291726" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.617644 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-hk87f"] Feb 23 19:38:12 crc kubenswrapper[4768]: 
I0223 19:38:12.619346 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.726797 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.727063 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvd6\" (UniqueName: \"kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.829341 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.829502 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.829652 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnvd6\" (UniqueName: \"kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") 
" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.848406 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnvd6\" (UniqueName: \"kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6\") pod \"crc-debug-hk87f\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: I0223 19:38:12.937607 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:12 crc kubenswrapper[4768]: W0223 19:38:12.963862 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod875325fa_da77_439c_a9e1_aa3861b701b8.slice/crio-466e310b96556e3518c629bf43569764c5b3f01b5d8b06f32d366c73f4a9dff2 WatchSource:0}: Error finding container 466e310b96556e3518c629bf43569764c5b3f01b5d8b06f32d366c73f4a9dff2: Status 404 returned error can't find the container with id 466e310b96556e3518c629bf43569764c5b3f01b5d8b06f32d366c73f4a9dff2 Feb 23 19:38:13 crc kubenswrapper[4768]: I0223 19:38:13.556326 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" event={"ID":"875325fa-da77-439c-a9e1-aa3861b701b8","Type":"ContainerStarted","Data":"b7b946954edbb096a8c8c07a9a71f70dd0c11ad6a34fee5be327e2776c1e908c"} Feb 23 19:38:13 crc kubenswrapper[4768]: I0223 19:38:13.556652 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" event={"ID":"875325fa-da77-439c-a9e1-aa3861b701b8","Type":"ContainerStarted","Data":"466e310b96556e3518c629bf43569764c5b3f01b5d8b06f32d366c73f4a9dff2"} Feb 23 19:38:13 crc kubenswrapper[4768]: I0223 19:38:13.577876 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" 
podStartSLOduration=1.57785153 podStartE2EDuration="1.57785153s" podCreationTimestamp="2026-02-23 19:38:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-23 19:38:13.572239548 +0000 UTC m=+3888.962725378" watchObservedRunningTime="2026-02-23 19:38:13.57785153 +0000 UTC m=+3888.968337350" Feb 23 19:38:17 crc kubenswrapper[4768]: I0223 19:38:17.308562 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:38:17 crc kubenswrapper[4768]: E0223 19:38:17.309604 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:38:28 crc kubenswrapper[4768]: I0223 19:38:28.307398 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:38:28 crc kubenswrapper[4768]: E0223 19:38:28.308210 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:38:43 crc kubenswrapper[4768]: I0223 19:38:43.307475 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:38:43 crc kubenswrapper[4768]: E0223 19:38:43.309275 4768 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:38:44 crc kubenswrapper[4768]: I0223 19:38:44.809443 4768 generic.go:334] "Generic (PLEG): container finished" podID="875325fa-da77-439c-a9e1-aa3861b701b8" containerID="b7b946954edbb096a8c8c07a9a71f70dd0c11ad6a34fee5be327e2776c1e908c" exitCode=0 Feb 23 19:38:44 crc kubenswrapper[4768]: I0223 19:38:44.809549 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" event={"ID":"875325fa-da77-439c-a9e1-aa3861b701b8","Type":"ContainerDied","Data":"b7b946954edbb096a8c8c07a9a71f70dd0c11ad6a34fee5be327e2776c1e908c"} Feb 23 19:38:45 crc kubenswrapper[4768]: I0223 19:38:45.961008 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:45 crc kubenswrapper[4768]: I0223 19:38:45.998162 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-hk87f"] Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.006846 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-hk87f"] Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.123607 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host\") pod \"875325fa-da77-439c-a9e1-aa3861b701b8\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.123700 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnvd6\" (UniqueName: \"kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6\") pod \"875325fa-da77-439c-a9e1-aa3861b701b8\" (UID: \"875325fa-da77-439c-a9e1-aa3861b701b8\") " Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.123757 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host" (OuterVolumeSpecName: "host") pod "875325fa-da77-439c-a9e1-aa3861b701b8" (UID: "875325fa-da77-439c-a9e1-aa3861b701b8"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.124417 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/875325fa-da77-439c-a9e1-aa3861b701b8-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.130120 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6" (OuterVolumeSpecName: "kube-api-access-mnvd6") pod "875325fa-da77-439c-a9e1-aa3861b701b8" (UID: "875325fa-da77-439c-a9e1-aa3861b701b8"). InnerVolumeSpecName "kube-api-access-mnvd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.225962 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnvd6\" (UniqueName: \"kubernetes.io/projected/875325fa-da77-439c-a9e1-aa3861b701b8-kube-api-access-mnvd6\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.830026 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="466e310b96556e3518c629bf43569764c5b3f01b5d8b06f32d366c73f4a9dff2" Feb 23 19:38:46 crc kubenswrapper[4768]: I0223 19:38:46.830079 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-hk87f" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.201299 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-n5t5c"] Feb 23 19:38:47 crc kubenswrapper[4768]: E0223 19:38:47.201822 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875325fa-da77-439c-a9e1-aa3861b701b8" containerName="container-00" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.201838 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="875325fa-da77-439c-a9e1-aa3861b701b8" containerName="container-00" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.202070 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="875325fa-da77-439c-a9e1-aa3861b701b8" containerName="container-00" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.202879 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.246779 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf6s9\" (UniqueName: \"kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.246886 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.322179 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="875325fa-da77-439c-a9e1-aa3861b701b8" 
path="/var/lib/kubelet/pods/875325fa-da77-439c-a9e1-aa3861b701b8/volumes" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.348761 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gf6s9\" (UniqueName: \"kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.348818 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.349086 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.373836 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gf6s9\" (UniqueName: \"kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9\") pod \"crc-debug-n5t5c\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.520883 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.841305 4768 generic.go:334] "Generic (PLEG): container finished" podID="9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" containerID="01b86fbc7ff1f24c5eb7a4b2d8dca78d7fc8f152c2ea904f6b835da55ed2a527" exitCode=0 Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.841410 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" event={"ID":"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32","Type":"ContainerDied","Data":"01b86fbc7ff1f24c5eb7a4b2d8dca78d7fc8f152c2ea904f6b835da55ed2a527"} Feb 23 19:38:47 crc kubenswrapper[4768]: I0223 19:38:47.841700 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" event={"ID":"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32","Type":"ContainerStarted","Data":"7e603987f8145c9126b83abe0b52d192b9c4ee60b046305d02082f79cb50b00d"} Feb 23 19:38:48 crc kubenswrapper[4768]: I0223 19:38:48.262659 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-n5t5c"] Feb 23 19:38:48 crc kubenswrapper[4768]: I0223 19:38:48.270335 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-n5t5c"] Feb 23 19:38:48 crc kubenswrapper[4768]: I0223 19:38:48.945568 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.107765 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host\") pod \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.108204 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host" (OuterVolumeSpecName: "host") pod "9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" (UID: "9ff2beab-33ee-4e4e-b1d6-b4eada98ef32"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.108909 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf6s9\" (UniqueName: \"kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9\") pod \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\" (UID: \"9ff2beab-33ee-4e4e-b1d6-b4eada98ef32\") " Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.109984 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.118540 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9" (OuterVolumeSpecName: "kube-api-access-gf6s9") pod "9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" (UID: "9ff2beab-33ee-4e4e-b1d6-b4eada98ef32"). InnerVolumeSpecName "kube-api-access-gf6s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.210917 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf6s9\" (UniqueName: \"kubernetes.io/projected/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32-kube-api-access-gf6s9\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.320810 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" path="/var/lib/kubelet/pods/9ff2beab-33ee-4e4e-b1d6-b4eada98ef32/volumes" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.488174 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-f648q"] Feb 23 19:38:49 crc kubenswrapper[4768]: E0223 19:38:49.488664 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" containerName="container-00" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.488684 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" containerName="container-00" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.488893 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff2beab-33ee-4e4e-b1d6-b4eada98ef32" containerName="container-00" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.489568 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.616394 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.617265 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thjzr\" (UniqueName: \"kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.719310 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.719419 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thjzr\" (UniqueName: \"kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.719446 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc 
kubenswrapper[4768]: I0223 19:38:49.740941 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thjzr\" (UniqueName: \"kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr\") pod \"crc-debug-f648q\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.817669 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:49 crc kubenswrapper[4768]: W0223 19:38:49.846960 4768 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5900e708_656c_4e12_add4_26dd584838d5.slice/crio-1b730b0ee313775fb24cdb9fa245bc14a97479bca3734acf0b86e121063ec8fb WatchSource:0}: Error finding container 1b730b0ee313775fb24cdb9fa245bc14a97479bca3734acf0b86e121063ec8fb: Status 404 returned error can't find the container with id 1b730b0ee313775fb24cdb9fa245bc14a97479bca3734acf0b86e121063ec8fb Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.858942 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-f648q" event={"ID":"5900e708-656c-4e12-add4-26dd584838d5","Type":"ContainerStarted","Data":"1b730b0ee313775fb24cdb9fa245bc14a97479bca3734acf0b86e121063ec8fb"} Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.860990 4768 scope.go:117] "RemoveContainer" containerID="01b86fbc7ff1f24c5eb7a4b2d8dca78d7fc8f152c2ea904f6b835da55ed2a527" Feb 23 19:38:49 crc kubenswrapper[4768]: I0223 19:38:49.861049 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-n5t5c" Feb 23 19:38:50 crc kubenswrapper[4768]: I0223 19:38:50.869657 4768 generic.go:334] "Generic (PLEG): container finished" podID="5900e708-656c-4e12-add4-26dd584838d5" containerID="0e2937c476a874fc62291249a062619685899e8b7c8450621026fa72e3e8c46c" exitCode=0 Feb 23 19:38:50 crc kubenswrapper[4768]: I0223 19:38:50.869716 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/crc-debug-f648q" event={"ID":"5900e708-656c-4e12-add4-26dd584838d5","Type":"ContainerDied","Data":"0e2937c476a874fc62291249a062619685899e8b7c8450621026fa72e3e8c46c"} Feb 23 19:38:50 crc kubenswrapper[4768]: I0223 19:38:50.907969 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-f648q"] Feb 23 19:38:50 crc kubenswrapper[4768]: I0223 19:38:50.919530 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbhb5/crc-debug-f648q"] Feb 23 19:38:51 crc kubenswrapper[4768]: I0223 19:38:51.981077 4768 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.161222 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thjzr\" (UniqueName: \"kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr\") pod \"5900e708-656c-4e12-add4-26dd584838d5\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.161558 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host\") pod \"5900e708-656c-4e12-add4-26dd584838d5\" (UID: \"5900e708-656c-4e12-add4-26dd584838d5\") " Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.161808 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host" (OuterVolumeSpecName: "host") pod "5900e708-656c-4e12-add4-26dd584838d5" (UID: "5900e708-656c-4e12-add4-26dd584838d5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.162390 4768 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5900e708-656c-4e12-add4-26dd584838d5-host\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.166698 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr" (OuterVolumeSpecName: "kube-api-access-thjzr") pod "5900e708-656c-4e12-add4-26dd584838d5" (UID: "5900e708-656c-4e12-add4-26dd584838d5"). InnerVolumeSpecName "kube-api-access-thjzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.264294 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thjzr\" (UniqueName: \"kubernetes.io/projected/5900e708-656c-4e12-add4-26dd584838d5-kube-api-access-thjzr\") on node \"crc\" DevicePath \"\"" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.891363 4768 scope.go:117] "RemoveContainer" containerID="0e2937c476a874fc62291249a062619685899e8b7c8450621026fa72e3e8c46c" Feb 23 19:38:52 crc kubenswrapper[4768]: I0223 19:38:52.891434 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/crc-debug-f648q" Feb 23 19:38:53 crc kubenswrapper[4768]: I0223 19:38:53.318807 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5900e708-656c-4e12-add4-26dd584838d5" path="/var/lib/kubelet/pods/5900e708-656c-4e12-add4-26dd584838d5/volumes" Feb 23 19:38:57 crc kubenswrapper[4768]: I0223 19:38:57.307720 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:38:57 crc kubenswrapper[4768]: E0223 19:38:57.308407 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:39:12 crc kubenswrapper[4768]: I0223 19:39:12.308199 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:39:13 crc kubenswrapper[4768]: I0223 19:39:13.086030 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" 
event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae"} Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.175781 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bc688ffdb-gftft_2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44/barbican-api/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.386276 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7bc688ffdb-gftft_2d4e76c7-6c4f-4e31-9a08-25bb4c5e1c44/barbican-api-log/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.463304 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5df7bc8868-6w74x_93487b6e-adae-4467-bc6f-022380ad3028/barbican-keystone-listener/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.498906 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5df7bc8868-6w74x_93487b6e-adae-4467-bc6f-022380ad3028/barbican-keystone-listener-log/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.703542 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9495fd7c-5kc55_df97f54a-8ff1-4de9-9a88-80561f4aa819/barbican-worker/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.707374 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-9495fd7c-5kc55_df97f54a-8ff1-4de9-9a88-80561f4aa819/barbican-worker-log/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.882085 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-s9tdc_dbe6c2e2-e359-4953-848a-c06651ec5760/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:28 crc kubenswrapper[4768]: I0223 19:39:28.955651 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/ceilometer-central-agent/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.026469 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/ceilometer-notification-agent/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.065612 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/proxy-httpd/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.155060 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_19bdd7e2-6cde-4412-b74b-eedc6428ac63/sg-core/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.258951 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_41e166f4-a4aa-4185-b21d-36037d575748/cinder-api-log/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.283668 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_41e166f4-a4aa-4185-b21d-36037d575748/cinder-api/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.450116 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3ef90267-50a1-45c4-9c1e-95f2ce0bce4b/cinder-scheduler/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.489655 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3ef90267-50a1-45c4-9c1e-95f2ce0bce4b/probe/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.612021 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rfxmq_fd5b2e52-1d19-459a-ae2f-a78b5a7df018/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.731920 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5ftnf_3945e9f4-308e-4769-a7b0-2984578eda25/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:29 crc kubenswrapper[4768]: I0223 19:39:29.892320 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/init/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.109701 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/init/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.116829 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-9b4p5_cae4398a-0817-4c3e-8449-9082d6d21b59/dnsmasq-dns/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.160804 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-j9qc7_964d25fb-0600-4332-9f40-85f700d35088/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.521365 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae86f9fa-10bf-4fbc-b768-0ac7e643483b/glance-httpd/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.607090 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_ae86f9fa-10bf-4fbc-b768-0ac7e643483b/glance-log/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.716314 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_95f60b43-7764-4d1c-bf7f-150e7fceef75/glance-log/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.769272 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_95f60b43-7764-4d1c-bf7f-150e7fceef75/glance-httpd/0.log" Feb 23 19:39:30 crc kubenswrapper[4768]: I0223 19:39:30.907518 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58cc9986b4-t7tcs_5fe017d9-f16b-465c-97a0-ebe4466006f0/horizon/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.081671 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-j9g8f_de5a4703-0650-427d-a791-f9a3386ca413/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.297803 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lgdpl_fa8ac6dd-0b71-465d-8658-5c10d07f1e0c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.347966 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-58cc9986b4-t7tcs_5fe017d9-f16b-465c-97a0-ebe4466006f0/horizon-log/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.556413 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7f7bc597d-jphlt_f3305106-4005-472a-980a-3030ee27d1bb/keystone-api/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.571315 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29531221-9gvxw_2f06a77a-756a-4cc8-9cea-c6c0da57bfd0/keystone-cron/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.786749 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e07b92be-5204-4ddb-97de-24984c997328/kube-state-metrics/0.log" Feb 23 19:39:31 crc kubenswrapper[4768]: I0223 19:39:31.793456 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-8bs7w_e4de542c-566e-4b7a-a999-04b1219e40a6/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:32 crc kubenswrapper[4768]: I0223 19:39:32.204287 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-546cfc7689-gsp5x_e861983f-c70e-47f3-936d-202ae74a1144/neutron-httpd/0.log" Feb 23 19:39:32 crc kubenswrapper[4768]: I0223 19:39:32.224895 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-546cfc7689-gsp5x_e861983f-c70e-47f3-936d-202ae74a1144/neutron-api/0.log" Feb 23 19:39:32 crc kubenswrapper[4768]: I0223 19:39:32.435616 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-kgqrz_8126924c-9f66-4df2-ac7c-eedcd34153b7/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:32 crc kubenswrapper[4768]: I0223 19:39:32.985627 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b973b91e-764a-461b-a4ca-50185f1f70af/nova-api-log/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.080039 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_22014113-0a8e-4444-b685-5ab40ffc8402/nova-cell0-conductor-conductor/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.389643 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b973b91e-764a-461b-a4ca-50185f1f70af/nova-api-api/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.407092 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_15bf982c-902c-45c7-9620-095ec38e9b86/nova-cell1-conductor-conductor/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.464931 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_c6a6d01d-0bb4-43aa-85c6-699d47fd2711/nova-cell1-novncproxy-novncproxy/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.633571 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-f79cf_4a3528f8-0776-47bf-81fa-c7bd1698938b/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:33 crc kubenswrapper[4768]: I0223 19:39:33.738449 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_08feb509-1dff-446f-bdf1-47c5bc09f772/nova-metadata-log/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.303048 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/mysql-bootstrap/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.384238 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7a66d66d-e9d1-4407-9e7e-268f1e7f0feb/nova-scheduler-scheduler/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.435992 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/mysql-bootstrap/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.584487 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2b0c66e-d534-4e7d-91dc-f05f5f857a43/galera/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.680325 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/mysql-bootstrap/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.866602 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/mysql-bootstrap/0.log" Feb 23 19:39:34 crc kubenswrapper[4768]: I0223 19:39:34.906360 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f2d53e56-3a7e-48fa-b0ea-59b932d3b25a/galera/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.130094 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_7fa93987-e84a-4fa8-97ab-4df24aabb201/openstackclient/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.146307 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_08feb509-1dff-446f-bdf1-47c5bc09f772/nova-metadata-metadata/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.177964 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-7xj45_6c33d166-1e3e-46c5-a725-472499a5efab/ovn-controller/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.410140 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-c45dt_1c73dca1-1a57-4c3a-8337-dba75d7e7b9c/openstack-network-exporter/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.418741 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server-init/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.576082 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server-init/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.624353 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovsdb-server/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.669933 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9r6tg_3c7bf964-ae59-40e5-9a0c-8fd8068b6695/ovs-vswitchd/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.823573 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bkgsc_76867435-2307-4032-a6ae-203f8009d08d/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.844862 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffe1b163-3686-4036-8f27-a4b600234d8a/openstack-network-exporter/0.log" Feb 23 19:39:35 crc kubenswrapper[4768]: I0223 19:39:35.882075 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_ffe1b163-3686-4036-8f27-a4b600234d8a/ovn-northd/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.148789 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b458d35-3ae1-4a39-b1e5-dcfef430f299/openstack-network-exporter/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.195271 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_3b458d35-3ae1-4a39-b1e5-dcfef430f299/ovsdbserver-nb/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.315898 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a43d0d6-32a5-4617-8613-e7fb22a39303/openstack-network-exporter/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.319391 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7a43d0d6-32a5-4617-8613-e7fb22a39303/ovsdbserver-sb/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.493943 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-574fcfd8cb-8sv54_77c8192d-2048-476f-af50-d65602ec4d05/placement-api/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.617827 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-574fcfd8cb-8sv54_77c8192d-2048-476f-af50-d65602ec4d05/placement-log/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.702576 
4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/setup-container/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.879573 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/setup-container/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.881892 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b8cb5a51-f628-42ca-9f9a-002d2f2f3b00/rabbitmq/0.log" Feb 23 19:39:36 crc kubenswrapper[4768]: I0223 19:39:36.928669 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/setup-container/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.150816 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/setup-container/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.162734 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3d3a3cc7-cb0f-40c3-b54c-86517ddf3efc/rabbitmq/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.276239 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-z4hnc_68e380e8-220c-4c0e-88e4-a818fb37fe57/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.447562 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9hqm7_34748e05-17f0-4701-936b-a023c3456a93/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.464145 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xlmjw_a7d9a362-95f1-4326-99a7-121ec8a4816f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.892151 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-ks98v_63675404-f203-4967-9c2b-817ff4d8715c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:37 crc kubenswrapper[4768]: I0223 19:39:37.971081 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bcnv8_18767704-7745-4fb0-8802-3dc2bf209bbe/ssh-known-hosts-edpm-deployment/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.143882 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66dcf5bf6c-4q2hn_70d5ee44-4e4a-4f31-8104-a72d66f78d72/proxy-server/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.245187 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-66dcf5bf6c-4q2hn_70d5ee44-4e4a-4f31-8104-a72d66f78d72/proxy-httpd/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.331853 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9nswb_1ddb02d3-f5a2-4681-90fe-4d5572fed381/swift-ring-rebalance/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.439987 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-auditor/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.508659 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-reaper/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.565318 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-replicator/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.630615 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-auditor/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.649821 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/account-server/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.797672 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-replicator/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.828887 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-server/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.838116 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/container-updater/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.877213 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-auditor/0.log" Feb 23 19:39:38 crc kubenswrapper[4768]: I0223 19:39:38.989029 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-expirer/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.041881 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-replicator/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.086651 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-server/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.102058 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/object-updater/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.230708 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/swift-recon-cron/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.255909 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_c2932248-edbb-4073-8a18-d076462b4201/rsync/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.447829 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-r9z6x_2393d837-c9f2-4896-ab3e-32924e48359a/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.485009 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_89c93f99-08a8-4231-8b96-d307d0525745/tempest-tests-tempest-tests-runner/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.720134 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0f9b3373-10b7-4e2c-8b9f-985eb74fb53d/test-operator-logs-container/0.log" Feb 23 19:39:39 crc kubenswrapper[4768]: I0223 19:39:39.729605 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6jcqk_c1470b37-b104-4991-a626-59fcd3936f2c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 23 19:39:48 crc kubenswrapper[4768]: I0223 19:39:48.164629 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_065294f2-15e0-4aeb-9002-9602051bf4ff/memcached/0.log" Feb 23 19:40:07 crc kubenswrapper[4768]: I0223 19:40:07.668937 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:40:07 crc kubenswrapper[4768]: I0223 19:40:07.932010 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:40:07 crc kubenswrapper[4768]: I0223 19:40:07.938482 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:40:07 crc kubenswrapper[4768]: I0223 19:40:07.980491 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:40:08 crc kubenswrapper[4768]: I0223 19:40:08.147925 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/pull/0.log" Feb 23 19:40:08 crc kubenswrapper[4768]: I0223 19:40:08.169020 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/util/0.log" Feb 23 19:40:08 crc kubenswrapper[4768]: I0223 19:40:08.200032 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c3e646651086af3874997b7f0c57e9bfc49229f1d298b61933ce276d62bk44c_1931c996-5088-425f-9e39-ef898c8742d8/extract/0.log" Feb 23 19:40:08 crc kubenswrapper[4768]: I0223 19:40:08.513759 4768 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-cprlh_aba58523-2fad-45af-87ee-a347b586ad4b/manager/0.log" Feb 23 19:40:08 crc kubenswrapper[4768]: I0223 19:40:08.873031 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-784b5bb6c5-chqsr_be4fc57a-a006-4068-be4b-5bdeb50f48b4/manager/0.log" Feb 23 19:40:09 crc kubenswrapper[4768]: I0223 19:40:09.058382 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-qnwrc_60e38add-201e-4431-90df-d9c31ba57f39/manager/0.log" Feb 23 19:40:09 crc kubenswrapper[4768]: I0223 19:40:09.318524 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-stm2m_0f6c6c75-0fda-41cc-b05f-cfc6e935f82b/manager/0.log" Feb 23 19:40:09 crc kubenswrapper[4768]: I0223 19:40:09.814191 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-wwlql_d52cf386-a646-44c0-8394-cdf497e52ebe/manager/0.log" Feb 23 19:40:09 crc kubenswrapper[4768]: I0223 19:40:09.912438 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-gn242_02eb4c80-855b-4590-b09e-d6e6b7919f74/manager/0.log" Feb 23 19:40:10 crc kubenswrapper[4768]: I0223 19:40:10.209076 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-l5mqh_0522a131-cf71-4a3e-b60a-fa16371d47d8/manager/0.log" Feb 23 19:40:10 crc kubenswrapper[4768]: I0223 19:40:10.480894 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-mng89_b16ba816-bafa-430e-b18a-5afa27bc0abb/manager/0.log" Feb 23 19:40:10 crc kubenswrapper[4768]: I0223 19:40:10.664708 4768 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-xm2kv_0f3afa5e-021e-4226-9734-38d4da145e0a/manager/0.log" Feb 23 19:40:11 crc kubenswrapper[4768]: I0223 19:40:11.008663 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6bd4687957-w5x47_9afd4512-6186-4cb8-a8ba-90628662efba/manager/0.log" Feb 23 19:40:11 crc kubenswrapper[4768]: I0223 19:40:11.032152 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-mzwrn_8f3b00ff-a5fc-422c-81fd-e9c0e2a6bf1b/manager/0.log" Feb 23 19:40:11 crc kubenswrapper[4768]: I0223 19:40:11.344698 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-pmp8k_cbbc4a69-26c2-4d05-b369-aa142f5a04d2/manager/0.log" Feb 23 19:40:11 crc kubenswrapper[4768]: I0223 19:40:11.408674 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-659dc6bbfc-7vrp5_c7086dd9-9e6f-4207-a037-99369dc6e980/manager/0.log" Feb 23 19:40:11 crc kubenswrapper[4768]: I0223 19:40:11.669189 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cb6g69_fff6d2ff-130f-45ae-943a-28b8740298c2/manager/0.log" Feb 23 19:40:12 crc kubenswrapper[4768]: I0223 19:40:12.458579 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5dfcfd9b6-jhz5n_80dc5267-2395-41a2-8e61-152b0acbc24c/operator/0.log" Feb 23 19:40:12 crc kubenswrapper[4768]: I0223 19:40:12.497494 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cwqr7_95798783-c266-4139-a43a-b4fbf879c1b8/registry-server/0.log" Feb 23 19:40:12 crc kubenswrapper[4768]: I0223 
19:40:12.725798 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5955d8c787-g9dpw_435b416a-a73b-420a-9f48-99be70b4e110/manager/0.log" Feb 23 19:40:12 crc kubenswrapper[4768]: I0223 19:40:12.856675 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-t5qm2_ea71893a-6b37-4cc9-b0f5-be711669e8d1/manager/0.log" Feb 23 19:40:12 crc kubenswrapper[4768]: I0223 19:40:12.960407 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dhmmp_d74d7097-0324-4bb7-83c6-fa8cea69c1b4/operator/0.log" Feb 23 19:40:13 crc kubenswrapper[4768]: I0223 19:40:13.166454 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-q66cg_13137d15-ffaa-4127-9885-91e9a6fd6a65/manager/0.log" Feb 23 19:40:13 crc kubenswrapper[4768]: I0223 19:40:13.452748 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5dc6794d5b-nc28p_0b78a9a3-5a2b-435d-8e2f-661eddd91177/manager/0.log" Feb 23 19:40:13 crc kubenswrapper[4768]: I0223 19:40:13.464052 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-589c568786-6wfdk_034d1fc6-6b51-4e9a-99f9-67038d4c9926/manager/0.log" Feb 23 19:40:13 crc kubenswrapper[4768]: I0223 19:40:13.659745 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-gn98t_86030533-da46-4579-a1ce-67f3d96c7a90/manager/0.log" Feb 23 19:40:13 crc kubenswrapper[4768]: I0223 19:40:13.771050 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7dfcb74874-dxkzr_92c4522a-291f-4c44-8e08-8e4002685f66/manager/0.log" Feb 23 19:40:16 crc 
kubenswrapper[4768]: I0223 19:40:16.994188 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-cj6bl_97f25c43-f624-4320-b34b-789df5cab5f3/manager/0.log" Feb 23 19:40:35 crc kubenswrapper[4768]: I0223 19:40:35.028396 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6qzzx_30b720f3-fda0-41f1-bca9-e52fe84a3535/control-plane-machine-set-operator/0.log" Feb 23 19:40:35 crc kubenswrapper[4768]: I0223 19:40:35.144159 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn4nn_4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de/kube-rbac-proxy/0.log" Feb 23 19:40:35 crc kubenswrapper[4768]: I0223 19:40:35.196742 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vn4nn_4cc7ad33-9f56-4f86-bf59-5cd21b4fc3de/machine-api-operator/0.log" Feb 23 19:40:49 crc kubenswrapper[4768]: I0223 19:40:49.421866 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2pxdp_ce41d193-31cd-4318-b8a6-9f0663e19dd1/cert-manager-controller/0.log" Feb 23 19:40:49 crc kubenswrapper[4768]: I0223 19:40:49.545872 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-kbhg9_2434360d-4475-492b-b0d6-d2105f2cf727/cert-manager-cainjector/0.log" Feb 23 19:40:49 crc kubenswrapper[4768]: I0223 19:40:49.622505 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-5xqnq_9e4e6814-0ed0-42f2-a94e-27bb939aa62f/cert-manager-webhook/0.log" Feb 23 19:41:04 crc kubenswrapper[4768]: I0223 19:41:04.709746 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dstrl_37f3006a-1eda-448a-9a9a-77dd20f51534/nmstate-console-plugin/0.log" Feb 23 
19:41:04 crc kubenswrapper[4768]: I0223 19:41:04.846465 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-sq2t7_34f1b59b-1b5b-4093-bf9b-97d19e3118e2/nmstate-handler/0.log" Feb 23 19:41:04 crc kubenswrapper[4768]: I0223 19:41:04.922460 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7sgf5_405a4831-883b-4d37-9b41-50b60a1268bf/kube-rbac-proxy/0.log" Feb 23 19:41:04 crc kubenswrapper[4768]: I0223 19:41:04.977955 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-7sgf5_405a4831-883b-4d37-9b41-50b60a1268bf/nmstate-metrics/0.log" Feb 23 19:41:05 crc kubenswrapper[4768]: I0223 19:41:05.088844 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-twf42_13c778fb-2aa4-4078-8393-45d0334de750/nmstate-operator/0.log" Feb 23 19:41:05 crc kubenswrapper[4768]: I0223 19:41:05.183783 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-w8cwv_f792bcb6-c414-4f4a-ae75-528cbe81b29d/nmstate-webhook/0.log" Feb 23 19:41:35 crc kubenswrapper[4768]: I0223 19:41:35.544347 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8snqf_a02480fd-a2d6-4364-b83f-e01dfa5a6676/kube-rbac-proxy/0.log" Feb 23 19:41:35 crc kubenswrapper[4768]: I0223 19:41:35.690538 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-8snqf_a02480fd-a2d6-4364-b83f-e01dfa5a6676/controller/0.log" Feb 23 19:41:35 crc kubenswrapper[4768]: I0223 19:41:35.789073 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.390566 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.432374 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.434758 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.449705 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.574595 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.575165 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.636333 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.694896 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.807936 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-frr-files/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.813992 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-metrics/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.846776 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/cp-reloader/0.log" Feb 23 19:41:36 crc kubenswrapper[4768]: I0223 19:41:36.889548 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/controller/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.016824 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/frr-metrics/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.026040 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/kube-rbac-proxy/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.084441 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/kube-rbac-proxy-frr/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.271650 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/reloader/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.286692 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-sglwm_06a269f1-e448-49da-b22d-7ef6bcfe31e1/frr-k8s-webhook-server/0.log" Feb 23 19:41:37 crc kubenswrapper[4768]: I0223 19:41:37.902688 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-655544f676-lzj52_e250524b-d6cd-444e-9e6b-3a2a5387d3b2/manager/0.log" Feb 23 19:41:38 crc kubenswrapper[4768]: I0223 19:41:38.158329 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cf8d9bdbb-l2w9j_8c32327a-6231-46a7-9d4b-e0ef86979632/webhook-server/0.log" Feb 23 19:41:38 crc kubenswrapper[4768]: I0223 19:41:38.211541 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-knv9f_bc147539-1205-4a1f-82d6-ca40f47d37d0/kube-rbac-proxy/0.log" Feb 23 19:41:38 crc kubenswrapper[4768]: I0223 19:41:38.297806 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lhjzv_2c2223f2-8fac-4021-b096-4087bac80ab0/frr/0.log" Feb 23 19:41:38 crc kubenswrapper[4768]: I0223 19:41:38.649972 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-knv9f_bc147539-1205-4a1f-82d6-ca40f47d37d0/speaker/0.log" Feb 23 19:41:39 crc kubenswrapper[4768]: I0223 19:41:39.545421 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:41:39 crc kubenswrapper[4768]: I0223 19:41:39.545489 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:41:52 crc kubenswrapper[4768]: I0223 19:41:52.631427 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:41:52 crc kubenswrapper[4768]: I0223 19:41:52.817153 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:41:52 crc kubenswrapper[4768]: I0223 19:41:52.818055 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:41:52 crc kubenswrapper[4768]: I0223 19:41:52.818174 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.003728 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/extract/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.012523 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/util/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.014272 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213rn9qc_0b6937d2-6789-4b4e-bb7c-a298b8e23168/pull/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.169772 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.325464 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 
19:41:53.342598 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.357786 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.528819 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-utilities/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.535866 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/extract-content/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.753304 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.906034 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.934446 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.966185 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-w88p6_0b420d54-c936-4147-8f04-18d8c91b1701/registry-server/0.log" Feb 23 19:41:53 crc kubenswrapper[4768]: I0223 19:41:53.999084 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.171897 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-utilities/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.238049 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/extract-content/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.364751 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.640564 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lgjxp_f3ce0320-02ba-4678-aa24-65028a4a84a7/registry-server/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.649723 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.666907 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.667383 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.794271 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/pull/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.797657 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/util/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.833949 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecah7lpn_e6bb516f-a8f7-417d-bc13-cca686ed2bdd/extract/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.976727 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-n6tjv_c25ac972-0ed9-475d-b506-222f90fe52f9/marketplace-operator/0.log" Feb 23 19:41:54 crc kubenswrapper[4768]: I0223 19:41:54.996020 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.342339 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.344591 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.358880 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.510990 4768 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-utilities/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.513282 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/extract-content/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.691211 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-cnjln_039380f0-e2fb-42b8-a034-0ed97dc84cc5/registry-server/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.763636 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.906034 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.930216 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:41:55 crc kubenswrapper[4768]: I0223 19:41:55.954793 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:41:56 crc kubenswrapper[4768]: I0223 19:41:56.101660 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-content/0.log" Feb 23 19:41:56 crc kubenswrapper[4768]: I0223 19:41:56.168152 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/extract-utilities/0.log" Feb 
23 19:41:56 crc kubenswrapper[4768]: I0223 19:41:56.514459 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5m2c_03532675-9efc-4d5c-ae55-5c9e1d240346/registry-server/0.log" Feb 23 19:42:09 crc kubenswrapper[4768]: I0223 19:42:09.544613 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:42:09 crc kubenswrapper[4768]: I0223 19:42:09.545224 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:42:39 crc kubenswrapper[4768]: I0223 19:42:39.545148 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:42:39 crc kubenswrapper[4768]: I0223 19:42:39.545772 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:42:39 crc kubenswrapper[4768]: I0223 19:42:39.545818 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:42:39 crc kubenswrapper[4768]: I0223 19:42:39.546578 4768 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:42:39 crc kubenswrapper[4768]: I0223 19:42:39.546642 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae" gracePeriod=600 Feb 23 19:42:40 crc kubenswrapper[4768]: I0223 19:42:40.068035 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae" exitCode=0 Feb 23 19:42:40 crc kubenswrapper[4768]: I0223 19:42:40.068496 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae"} Feb 23 19:42:40 crc kubenswrapper[4768]: I0223 19:42:40.068537 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerStarted","Data":"bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"} Feb 23 19:42:40 crc kubenswrapper[4768]: I0223 19:42:40.068564 4768 scope.go:117] "RemoveContainer" containerID="6da7cbf2f6d80bc1db94260bf834c861fbc811760ba823d1e5a8c9f049aca59c" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.714209 4768 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xddbj"] Feb 23 19:43:17 crc kubenswrapper[4768]: E0223 19:43:17.715546 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5900e708-656c-4e12-add4-26dd584838d5" containerName="container-00" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.715569 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="5900e708-656c-4e12-add4-26dd584838d5" containerName="container-00" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.715956 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="5900e708-656c-4e12-add4-26dd584838d5" containerName="container-00" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.718386 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xddbj" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.732268 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xddbj"] Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.766901 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4nf\" (UniqueName: \"kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.767047 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj" Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.767157 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.868353 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.868446 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.868494 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4nf\" (UniqueName: \"kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.869089 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.869500 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:17 crc kubenswrapper[4768]: I0223 19:43:17.901706 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4nf\" (UniqueName: \"kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf\") pod \"redhat-marketplace-xddbj\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") " pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:18 crc kubenswrapper[4768]: I0223 19:43:18.052293 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:18 crc kubenswrapper[4768]: I0223 19:43:18.533256 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xddbj"]
Feb 23 19:43:19 crc kubenswrapper[4768]: I0223 19:43:19.528240 4768 generic.go:334] "Generic (PLEG): container finished" podID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerID="1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b" exitCode=0
Feb 23 19:43:19 crc kubenswrapper[4768]: I0223 19:43:19.528306 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerDied","Data":"1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b"}
Feb 23 19:43:19 crc kubenswrapper[4768]: I0223 19:43:19.528786 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerStarted","Data":"c1bf36bb3ff95d58ab06ef4e06286858c3ad206a35d0d2b3b9011e40fcf4b417"}
Feb 23 19:43:19 crc kubenswrapper[4768]: I0223 19:43:19.530594 4768 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 23 19:43:20 crc kubenswrapper[4768]: I0223 19:43:20.543232 4768 generic.go:334] "Generic (PLEG): container finished" podID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerID="391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a" exitCode=0
Feb 23 19:43:20 crc kubenswrapper[4768]: I0223 19:43:20.543323 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerDied","Data":"391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a"}
Feb 23 19:43:21 crc kubenswrapper[4768]: I0223 19:43:21.558196 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerStarted","Data":"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"}
Feb 23 19:43:21 crc kubenswrapper[4768]: I0223 19:43:21.581493 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xddbj" podStartSLOduration=3.177110121 podStartE2EDuration="4.581457708s" podCreationTimestamp="2026-02-23 19:43:17 +0000 UTC" firstStartedPulling="2026-02-23 19:43:19.530391852 +0000 UTC m=+4194.920877652" lastFinishedPulling="2026-02-23 19:43:20.934739439 +0000 UTC m=+4196.325225239" observedRunningTime="2026-02-23 19:43:21.57152307 +0000 UTC m=+4196.962008870" watchObservedRunningTime="2026-02-23 19:43:21.581457708 +0000 UTC m=+4196.971943548"
Feb 23 19:43:28 crc kubenswrapper[4768]: I0223 19:43:28.052838 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:28 crc kubenswrapper[4768]: I0223 19:43:28.053364 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:28 crc kubenswrapper[4768]: I0223 19:43:28.342511 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:28 crc kubenswrapper[4768]: I0223 19:43:28.685827 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:28 crc kubenswrapper[4768]: I0223 19:43:28.781239 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xddbj"]
Feb 23 19:43:30 crc kubenswrapper[4768]: I0223 19:43:30.639754 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xddbj" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="registry-server" containerID="cri-o://f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536" gracePeriod=2
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.147163 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.255435 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content\") pod \"104c7146-9db3-42ee-b6c8-73af19c52f2c\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") "
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.255693 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities\") pod \"104c7146-9db3-42ee-b6c8-73af19c52f2c\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") "
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.255738 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl4nf\" (UniqueName: \"kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf\") pod \"104c7146-9db3-42ee-b6c8-73af19c52f2c\" (UID: \"104c7146-9db3-42ee-b6c8-73af19c52f2c\") "
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.258557 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities" (OuterVolumeSpecName: "utilities") pod "104c7146-9db3-42ee-b6c8-73af19c52f2c" (UID: "104c7146-9db3-42ee-b6c8-73af19c52f2c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.277134 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf" (OuterVolumeSpecName: "kube-api-access-tl4nf") pod "104c7146-9db3-42ee-b6c8-73af19c52f2c" (UID: "104c7146-9db3-42ee-b6c8-73af19c52f2c"). InnerVolumeSpecName "kube-api-access-tl4nf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.279610 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "104c7146-9db3-42ee-b6c8-73af19c52f2c" (UID: "104c7146-9db3-42ee-b6c8-73af19c52f2c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.357626 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.357664 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl4nf\" (UniqueName: \"kubernetes.io/projected/104c7146-9db3-42ee-b6c8-73af19c52f2c-kube-api-access-tl4nf\") on node \"crc\" DevicePath \"\""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.357675 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/104c7146-9db3-42ee-b6c8-73af19c52f2c-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.664223 4768 generic.go:334] "Generic (PLEG): container finished" podID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerID="f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536" exitCode=0
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.664429 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xddbj"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.664471 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerDied","Data":"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"}
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.664580 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xddbj" event={"ID":"104c7146-9db3-42ee-b6c8-73af19c52f2c","Type":"ContainerDied","Data":"c1bf36bb3ff95d58ab06ef4e06286858c3ad206a35d0d2b3b9011e40fcf4b417"}
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.664625 4768 scope.go:117] "RemoveContainer" containerID="f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.735459 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xddbj"]
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.739272 4768 scope.go:117] "RemoveContainer" containerID="391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.773010 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xddbj"]
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.797786 4768 scope.go:117] "RemoveContainer" containerID="1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.832095 4768 scope.go:117] "RemoveContainer" containerID="f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"
Feb 23 19:43:31 crc kubenswrapper[4768]: E0223 19:43:31.832525 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536\": container with ID starting with f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536 not found: ID does not exist" containerID="f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.832566 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536"} err="failed to get container status \"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536\": rpc error: code = NotFound desc = could not find container \"f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536\": container with ID starting with f3a32cc00b997445b4cd9113b871519e51c0a20a18fc20df2917ec6279217536 not found: ID does not exist"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.832591 4768 scope.go:117] "RemoveContainer" containerID="391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a"
Feb 23 19:43:31 crc kubenswrapper[4768]: E0223 19:43:31.833457 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a\": container with ID starting with 391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a not found: ID does not exist" containerID="391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.833478 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a"} err="failed to get container status \"391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a\": rpc error: code = NotFound desc = could not find container \"391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a\": container with ID starting with 391ed998d6d9ca4b44dd2fb84972671e1efe1053ca8dfe8fc100821494c3fb7a not found: ID does not exist"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.833509 4768 scope.go:117] "RemoveContainer" containerID="1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b"
Feb 23 19:43:31 crc kubenswrapper[4768]: E0223 19:43:31.833841 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b\": container with ID starting with 1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b not found: ID does not exist" containerID="1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b"
Feb 23 19:43:31 crc kubenswrapper[4768]: I0223 19:43:31.833881 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b"} err="failed to get container status \"1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b\": rpc error: code = NotFound desc = could not find container \"1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b\": container with ID starting with 1e08694b0779b2ae3b2e77a1728ea0be0d1a0c6fd5d9c08ae3a3e9cc5751376b not found: ID does not exist"
Feb 23 19:43:33 crc kubenswrapper[4768]: I0223 19:43:33.327432 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" path="/var/lib/kubelet/pods/104c7146-9db3-42ee-b6c8-73af19c52f2c/volumes"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.418222 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kq728"]
Feb 23 19:43:34 crc kubenswrapper[4768]: E0223 19:43:34.419090 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="registry-server"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.419106 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="registry-server"
Feb 23 19:43:34 crc kubenswrapper[4768]: E0223 19:43:34.419152 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="extract-utilities"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.419160 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="extract-utilities"
Feb 23 19:43:34 crc kubenswrapper[4768]: E0223 19:43:34.419180 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="extract-content"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.419189 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="extract-content"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.419439 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="104c7146-9db3-42ee-b6c8-73af19c52f2c" containerName="registry-server"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.426290 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.431518 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kq728"]
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.530166 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-catalog-content\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.530298 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-utilities\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.530370 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dcl\" (UniqueName: \"kubernetes.io/projected/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-kube-api-access-62dcl\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.631921 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-catalog-content\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.632022 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-utilities\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.632073 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dcl\" (UniqueName: \"kubernetes.io/projected/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-kube-api-access-62dcl\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.632815 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-catalog-content\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.633038 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-utilities\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.655077 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dcl\" (UniqueName: \"kubernetes.io/projected/3b4cccac-3be7-4b63-bfa1-0d66887f9d65-kube-api-access-62dcl\") pod \"redhat-operators-kq728\" (UID: \"3b4cccac-3be7-4b63-bfa1-0d66887f9d65\") " pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:34 crc kubenswrapper[4768]: I0223 19:43:34.772046 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:35 crc kubenswrapper[4768]: I0223 19:43:35.208838 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kq728"]
Feb 23 19:43:35 crc kubenswrapper[4768]: I0223 19:43:35.698738 4768 generic.go:334] "Generic (PLEG): container finished" podID="3b4cccac-3be7-4b63-bfa1-0d66887f9d65" containerID="f0043cd84a955d7e004ca8249622910e63e26fa69e9b69009b353f0d086d5728" exitCode=0
Feb 23 19:43:35 crc kubenswrapper[4768]: I0223 19:43:35.698807 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq728" event={"ID":"3b4cccac-3be7-4b63-bfa1-0d66887f9d65","Type":"ContainerDied","Data":"f0043cd84a955d7e004ca8249622910e63e26fa69e9b69009b353f0d086d5728"}
Feb 23 19:43:35 crc kubenswrapper[4768]: I0223 19:43:35.698978 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq728" event={"ID":"3b4cccac-3be7-4b63-bfa1-0d66887f9d65","Type":"ContainerStarted","Data":"ad28c11723afcb26764f0133f703a3bff050f7d12c991a8e85e7778a327534fb"}
Feb 23 19:43:48 crc kubenswrapper[4768]: I0223 19:43:48.842542 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq728" event={"ID":"3b4cccac-3be7-4b63-bfa1-0d66887f9d65","Type":"ContainerStarted","Data":"5bbea154ac05a3e60c9ca286ef6a1661958377c9fc0b18c1ba28263c2ecf4dbd"}
Feb 23 19:43:48 crc kubenswrapper[4768]: I0223 19:43:48.845924 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ef14749-73b8-4e85-b19b-81633da7d903" containerID="f7346f3dc56c92437e38e7563c709a992334cf9dfd7eb4e39063fb258867caba" exitCode=0
Feb 23 19:43:48 crc kubenswrapper[4768]: I0223 19:43:48.845962 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sbhb5/must-gather-bbthh" event={"ID":"7ef14749-73b8-4e85-b19b-81633da7d903","Type":"ContainerDied","Data":"f7346f3dc56c92437e38e7563c709a992334cf9dfd7eb4e39063fb258867caba"}
Feb 23 19:43:48 crc kubenswrapper[4768]: I0223 19:43:48.846346 4768 scope.go:117] "RemoveContainer" containerID="f7346f3dc56c92437e38e7563c709a992334cf9dfd7eb4e39063fb258867caba"
Feb 23 19:43:49 crc kubenswrapper[4768]: I0223 19:43:49.645522 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbhb5_must-gather-bbthh_7ef14749-73b8-4e85-b19b-81633da7d903/gather/0.log"
Feb 23 19:43:49 crc kubenswrapper[4768]: I0223 19:43:49.862751 4768 generic.go:334] "Generic (PLEG): container finished" podID="3b4cccac-3be7-4b63-bfa1-0d66887f9d65" containerID="5bbea154ac05a3e60c9ca286ef6a1661958377c9fc0b18c1ba28263c2ecf4dbd" exitCode=0
Feb 23 19:43:49 crc kubenswrapper[4768]: I0223 19:43:49.863660 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq728" event={"ID":"3b4cccac-3be7-4b63-bfa1-0d66887f9d65","Type":"ContainerDied","Data":"5bbea154ac05a3e60c9ca286ef6a1661958377c9fc0b18c1ba28263c2ecf4dbd"}
Feb 23 19:43:50 crc kubenswrapper[4768]: I0223 19:43:50.881740 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kq728" event={"ID":"3b4cccac-3be7-4b63-bfa1-0d66887f9d65","Type":"ContainerStarted","Data":"20c756b267ee6975007d640c4e5984c6aa33a94a9f296b716fc11940a31d3cb5"}
Feb 23 19:43:50 crc kubenswrapper[4768]: I0223 19:43:50.916867 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kq728" podStartSLOduration=2.332875317 podStartE2EDuration="16.916849063s" podCreationTimestamp="2026-02-23 19:43:34 +0000 UTC" firstStartedPulling="2026-02-23 19:43:35.700954686 +0000 UTC m=+4211.091440486" lastFinishedPulling="2026-02-23 19:43:50.284928432 +0000 UTC m=+4225.675414232" observedRunningTime="2026-02-23 19:43:50.907536481 +0000 UTC m=+4226.298022281" watchObservedRunningTime="2026-02-23 19:43:50.916849063 +0000 UTC m=+4226.307334863"
Feb 23 19:43:54 crc kubenswrapper[4768]: I0223 19:43:54.772602 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:54 crc kubenswrapper[4768]: I0223 19:43:54.772979 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:43:55 crc kubenswrapper[4768]: I0223 19:43:55.854804 4768 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kq728" podUID="3b4cccac-3be7-4b63-bfa1-0d66887f9d65" containerName="registry-server" probeResult="failure" output=<
Feb 23 19:43:55 crc kubenswrapper[4768]: timeout: failed to connect service ":50051" within 1s
Feb 23 19:43:55 crc kubenswrapper[4768]: >
Feb 23 19:44:01 crc kubenswrapper[4768]: I0223 19:44:01.819344 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sbhb5/must-gather-bbthh"]
Feb 23 19:44:01 crc kubenswrapper[4768]: I0223 19:44:01.820081 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sbhb5/must-gather-bbthh" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="copy" containerID="cri-o://994b96a3cce5c8e63ffdd79dc036e41dca233268e6245da455da174f28487719" gracePeriod=2
Feb 23 19:44:01 crc kubenswrapper[4768]: I0223 19:44:01.831538 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sbhb5/must-gather-bbthh"]
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.006904 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbhb5_must-gather-bbthh_7ef14749-73b8-4e85-b19b-81633da7d903/copy/0.log"
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.008909 4768 generic.go:334] "Generic (PLEG): container finished" podID="7ef14749-73b8-4e85-b19b-81633da7d903" containerID="994b96a3cce5c8e63ffdd79dc036e41dca233268e6245da455da174f28487719" exitCode=143
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.276376 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbhb5_must-gather-bbthh_7ef14749-73b8-4e85-b19b-81633da7d903/copy/0.log"
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.276742 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/must-gather-bbthh"
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.394149 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output\") pod \"7ef14749-73b8-4e85-b19b-81633da7d903\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") "
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.394390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmxh4\" (UniqueName: \"kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4\") pod \"7ef14749-73b8-4e85-b19b-81633da7d903\" (UID: \"7ef14749-73b8-4e85-b19b-81633da7d903\") "
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.404289 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4" (OuterVolumeSpecName: "kube-api-access-kmxh4") pod "7ef14749-73b8-4e85-b19b-81633da7d903" (UID: "7ef14749-73b8-4e85-b19b-81633da7d903"). InnerVolumeSpecName "kube-api-access-kmxh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.498065 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmxh4\" (UniqueName: \"kubernetes.io/projected/7ef14749-73b8-4e85-b19b-81633da7d903-kube-api-access-kmxh4\") on node \"crc\" DevicePath \"\""
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.561961 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7ef14749-73b8-4e85-b19b-81633da7d903" (UID: "7ef14749-73b8-4e85-b19b-81633da7d903"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:44:02 crc kubenswrapper[4768]: I0223 19:44:02.600438 4768 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7ef14749-73b8-4e85-b19b-81633da7d903-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 23 19:44:03 crc kubenswrapper[4768]: I0223 19:44:03.039423 4768 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sbhb5_must-gather-bbthh_7ef14749-73b8-4e85-b19b-81633da7d903/copy/0.log"
Feb 23 19:44:03 crc kubenswrapper[4768]: I0223 19:44:03.039902 4768 scope.go:117] "RemoveContainer" containerID="994b96a3cce5c8e63ffdd79dc036e41dca233268e6245da455da174f28487719"
Feb 23 19:44:03 crc kubenswrapper[4768]: I0223 19:44:03.039928 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sbhb5/must-gather-bbthh"
Feb 23 19:44:03 crc kubenswrapper[4768]: I0223 19:44:03.070231 4768 scope.go:117] "RemoveContainer" containerID="f7346f3dc56c92437e38e7563c709a992334cf9dfd7eb4e39063fb258867caba"
Feb 23 19:44:03 crc kubenswrapper[4768]: I0223 19:44:03.318416 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" path="/var/lib/kubelet/pods/7ef14749-73b8-4e85-b19b-81633da7d903/volumes"
Feb 23 19:44:04 crc kubenswrapper[4768]: I0223 19:44:04.838427 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:44:04 crc kubenswrapper[4768]: I0223 19:44:04.890113 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kq728"
Feb 23 19:44:05 crc kubenswrapper[4768]: I0223 19:44:05.424195 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kq728"]
Feb 23 19:44:05 crc kubenswrapper[4768]: I0223 19:44:05.607403 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"]
Feb 23 19:44:05 crc kubenswrapper[4768]: I0223 19:44:05.607709 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5m2c" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="registry-server" containerID="cri-o://29227f0d0451b267458252036338e0ee868916056c713bb860f4b14a3718a664" gracePeriod=2
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.080451 4768 generic.go:334] "Generic (PLEG): container finished" podID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerID="29227f0d0451b267458252036338e0ee868916056c713bb860f4b14a3718a664" exitCode=0
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.081357 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerDied","Data":"29227f0d0451b267458252036338e0ee868916056c713bb860f4b14a3718a664"}
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.081405 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5m2c" event={"ID":"03532675-9efc-4d5c-ae55-5c9e1d240346","Type":"ContainerDied","Data":"3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9"}
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.081415 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cef8207ab511587442b2e5101e57644961fd9b405e3f270de9377b694b264e9"
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.095940 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5m2c"
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.274647 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content\") pod \"03532675-9efc-4d5c-ae55-5c9e1d240346\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") "
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.274999 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities\") pod \"03532675-9efc-4d5c-ae55-5c9e1d240346\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") "
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.275295 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlgjw\" (UniqueName: \"kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw\") pod \"03532675-9efc-4d5c-ae55-5c9e1d240346\" (UID: \"03532675-9efc-4d5c-ae55-5c9e1d240346\") "
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.277529 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities" (OuterVolumeSpecName: "utilities") pod "03532675-9efc-4d5c-ae55-5c9e1d240346" (UID: "03532675-9efc-4d5c-ae55-5c9e1d240346"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.288031 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw" (OuterVolumeSpecName: "kube-api-access-wlgjw") pod "03532675-9efc-4d5c-ae55-5c9e1d240346" (UID: "03532675-9efc-4d5c-ae55-5c9e1d240346"). InnerVolumeSpecName "kube-api-access-wlgjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.377461 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlgjw\" (UniqueName: \"kubernetes.io/projected/03532675-9efc-4d5c-ae55-5c9e1d240346-kube-api-access-wlgjw\") on node \"crc\" DevicePath \"\""
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.377490 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.410414 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03532675-9efc-4d5c-ae55-5c9e1d240346" (UID: "03532675-9efc-4d5c-ae55-5c9e1d240346"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:44:06 crc kubenswrapper[4768]: I0223 19:44:06.479394 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03532675-9efc-4d5c-ae55-5c9e1d240346-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:44:07 crc kubenswrapper[4768]: I0223 19:44:07.088134 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5m2c"
Feb 23 19:44:07 crc kubenswrapper[4768]: I0223 19:44:07.123967 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"]
Feb 23 19:44:07 crc kubenswrapper[4768]: I0223 19:44:07.131678 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5m2c"]
Feb 23 19:44:07 crc kubenswrapper[4768]: I0223 19:44:07.319572 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" path="/var/lib/kubelet/pods/03532675-9efc-4d5c-ae55-5c9e1d240346/volumes"
Feb 23 19:44:39 crc kubenswrapper[4768]: I0223 19:44:39.545127 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 23 19:44:39 crc kubenswrapper[4768]: I0223 19:44:39.545770 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 23 19:44:47 crc kubenswrapper[4768]: I0223 19:44:47.639156 4768 scope.go:117] "RemoveContainer" containerID="9b19f4c5d65ba834a090b5bf5a0feb2a1886e2708dddf539743d2f83b4b48a96"
Feb 23 19:44:47 crc kubenswrapper[4768]: I0223 19:44:47.694968 4768 scope.go:117] "RemoveContainer" containerID="64e77c3e9416608b2edd38c9eaba38dccb3e1499cee3ef0b8dfd4fc4ae8aa706"
Feb 23 19:44:47 crc kubenswrapper[4768]: I0223 19:44:47.750087 4768 scope.go:117] "RemoveContainer" containerID="29227f0d0451b267458252036338e0ee868916056c713bb860f4b14a3718a664"
Feb 23 19:44:47 crc kubenswrapper[4768]: I0223 19:44:47.798658 4768 scope.go:117] "RemoveContainer" containerID="b7b946954edbb096a8c8c07a9a71f70dd0c11ad6a34fee5be327e2776c1e908c"
Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.207277 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv"]
Feb 23 19:45:00 crc kubenswrapper[4768]: E0223 19:45:00.208155 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="extract-utilities"
Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208169 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="extract-utilities"
Feb 23 19:45:00 crc kubenswrapper[4768]: E0223 19:45:00.208186 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="gather"
Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208198 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="gather"
Feb 23 19:45:00 crc kubenswrapper[4768]: E0223 19:45:00.208210 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="copy"
Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208219 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="copy"
Feb 23 19:45:00 crc 
kubenswrapper[4768]: E0223 19:45:00.208303 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="registry-server" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208317 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="registry-server" Feb 23 19:45:00 crc kubenswrapper[4768]: E0223 19:45:00.208343 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="extract-content" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208354 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="extract-content" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208574 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="gather" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208591 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef14749-73b8-4e85-b19b-81633da7d903" containerName="copy" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.208623 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="03532675-9efc-4d5c-ae55-5c9e1d240346" containerName="registry-server" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.209298 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.212651 4768 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.220946 4768 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.244482 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv"] Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.338303 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfbjp\" (UniqueName: \"kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.338891 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.338949 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.440859 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfbjp\" (UniqueName: \"kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.441233 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.441325 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.443031 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.460845 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.474793 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfbjp\" (UniqueName: \"kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp\") pod \"collect-profiles-29531265-fnccv\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:00 crc kubenswrapper[4768]: I0223 19:45:00.531903 4768 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:01 crc kubenswrapper[4768]: I0223 19:45:01.068772 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv"] Feb 23 19:45:02 crc kubenswrapper[4768]: I0223 19:45:02.038869 4768 generic.go:334] "Generic (PLEG): container finished" podID="f6151edd-7760-4838-84cf-e6a01c0aa8dd" containerID="f224b1db3e4b7bfa5a05abe1b55158998c357e1f0bd3ad78a8b30e59537a255c" exitCode=0 Feb 23 19:45:02 crc kubenswrapper[4768]: I0223 19:45:02.038965 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" event={"ID":"f6151edd-7760-4838-84cf-e6a01c0aa8dd","Type":"ContainerDied","Data":"f224b1db3e4b7bfa5a05abe1b55158998c357e1f0bd3ad78a8b30e59537a255c"} Feb 23 19:45:02 crc kubenswrapper[4768]: I0223 19:45:02.039450 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" 
event={"ID":"f6151edd-7760-4838-84cf-e6a01c0aa8dd","Type":"ContainerStarted","Data":"ea68eb1236d73c79997d6e961befa2fdd04c6d0ee37842059c15be7918d0de39"} Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.378522 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.511359 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume\") pod \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.511688 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume\") pod \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.511755 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfbjp\" (UniqueName: \"kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp\") pod \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\" (UID: \"f6151edd-7760-4838-84cf-e6a01c0aa8dd\") " Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.512206 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume" (OuterVolumeSpecName: "config-volume") pod "f6151edd-7760-4838-84cf-e6a01c0aa8dd" (UID: "f6151edd-7760-4838-84cf-e6a01c0aa8dd"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.517731 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp" (OuterVolumeSpecName: "kube-api-access-dfbjp") pod "f6151edd-7760-4838-84cf-e6a01c0aa8dd" (UID: "f6151edd-7760-4838-84cf-e6a01c0aa8dd"). InnerVolumeSpecName "kube-api-access-dfbjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.518015 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f6151edd-7760-4838-84cf-e6a01c0aa8dd" (UID: "f6151edd-7760-4838-84cf-e6a01c0aa8dd"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.614124 4768 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6151edd-7760-4838-84cf-e6a01c0aa8dd-config-volume\") on node \"crc\" DevicePath \"\"" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.614160 4768 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f6151edd-7760-4838-84cf-e6a01c0aa8dd-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 23 19:45:03 crc kubenswrapper[4768]: I0223 19:45:03.614171 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfbjp\" (UniqueName: \"kubernetes.io/projected/f6151edd-7760-4838-84cf-e6a01c0aa8dd-kube-api-access-dfbjp\") on node \"crc\" DevicePath \"\"" Feb 23 19:45:04 crc kubenswrapper[4768]: I0223 19:45:04.065563 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" 
event={"ID":"f6151edd-7760-4838-84cf-e6a01c0aa8dd","Type":"ContainerDied","Data":"ea68eb1236d73c79997d6e961befa2fdd04c6d0ee37842059c15be7918d0de39"} Feb 23 19:45:04 crc kubenswrapper[4768]: I0223 19:45:04.065625 4768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea68eb1236d73c79997d6e961befa2fdd04c6d0ee37842059c15be7918d0de39" Feb 23 19:45:04 crc kubenswrapper[4768]: I0223 19:45:04.065685 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29531265-fnccv" Feb 23 19:45:04 crc kubenswrapper[4768]: I0223 19:45:04.489924 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw"] Feb 23 19:45:04 crc kubenswrapper[4768]: I0223 19:45:04.500399 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29531220-prhtw"] Feb 23 19:45:05 crc kubenswrapper[4768]: I0223 19:45:05.331385 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb735541-cf3e-4a2a-afd4-05e9a11d0364" path="/var/lib/kubelet/pods/cb735541-cf3e-4a2a-afd4-05e9a11d0364/volumes" Feb 23 19:45:09 crc kubenswrapper[4768]: I0223 19:45:09.545231 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:45:09 crc kubenswrapper[4768]: I0223 19:45:09.545926 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:45:39 crc 
kubenswrapper[4768]: I0223 19:45:39.545087 4768 patch_prober.go:28] interesting pod/machine-config-daemon-zckb9 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 23 19:45:39 crc kubenswrapper[4768]: I0223 19:45:39.546061 4768 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 23 19:45:39 crc kubenswrapper[4768]: I0223 19:45:39.546152 4768 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" Feb 23 19:45:39 crc kubenswrapper[4768]: I0223 19:45:39.547418 4768 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"} pod="openshift-machine-config-operator/machine-config-daemon-zckb9" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 23 19:45:39 crc kubenswrapper[4768]: I0223 19:45:39.547549 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerName="machine-config-daemon" containerID="cri-o://bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" gracePeriod=600 Feb 23 19:45:39 crc kubenswrapper[4768]: E0223 19:45:39.676941 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:45:40 crc kubenswrapper[4768]: I0223 19:45:40.585620 4768 generic.go:334] "Generic (PLEG): container finished" podID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" exitCode=0 Feb 23 19:45:40 crc kubenswrapper[4768]: I0223 19:45:40.585676 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" event={"ID":"ed422723-0e38-45bc-a0d9-c4c51d3f2dc7","Type":"ContainerDied","Data":"bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"} Feb 23 19:45:40 crc kubenswrapper[4768]: I0223 19:45:40.586067 4768 scope.go:117] "RemoveContainer" containerID="cca6db9e075b2de225c4d2718ad25d00082b18220d57d4c9c143801aea4dacae" Feb 23 19:45:40 crc kubenswrapper[4768]: I0223 19:45:40.586802 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" Feb 23 19:45:40 crc kubenswrapper[4768]: E0223 19:45:40.587099 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:45:47 crc kubenswrapper[4768]: I0223 19:45:47.917319 4768 scope.go:117] "RemoveContainer" containerID="6c41a52d10b985ccb4667fa0792cf3a1076bb5608f35c8301addc75a936ce589" Feb 23 19:45:51 crc kubenswrapper[4768]: I0223 19:45:51.309114 4768 scope.go:117] "RemoveContainer" 
containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" Feb 23 19:45:51 crc kubenswrapper[4768]: E0223 19:45:51.310187 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:46:03 crc kubenswrapper[4768]: I0223 19:46:03.309108 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" Feb 23 19:46:03 crc kubenswrapper[4768]: E0223 19:46:03.310224 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:46:14 crc kubenswrapper[4768]: I0223 19:46:14.309319 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" Feb 23 19:46:14 crc kubenswrapper[4768]: E0223 19:46:14.312928 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:46:26 crc kubenswrapper[4768]: I0223 19:46:26.307948 4768 scope.go:117] 
"RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4" Feb 23 19:46:26 crc kubenswrapper[4768]: E0223 19:46:26.308761 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.536621 4768 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"] Feb 23 19:46:36 crc kubenswrapper[4768]: E0223 19:46:36.537827 4768 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6151edd-7760-4838-84cf-e6a01c0aa8dd" containerName="collect-profiles" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.537850 4768 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6151edd-7760-4838-84cf-e6a01c0aa8dd" containerName="collect-profiles" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.538171 4768 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6151edd-7760-4838-84cf-e6a01c0aa8dd" containerName="collect-profiles" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.539890 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.558744 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"] Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.727004 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.727188 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrh98\" (UniqueName: \"kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.727420 4768 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.829855 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.829956 4768 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrh98\" (UniqueName: \"kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.830032 4768 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.830324 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.830592 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.851026 4768 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrh98\" (UniqueName: \"kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98\") pod \"certified-operators-q7kp8\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") " pod="openshift-marketplace/certified-operators-q7kp8" Feb 23 19:46:36 crc kubenswrapper[4768]: I0223 19:46:36.871025 4768 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:37 crc kubenswrapper[4768]: I0223 19:46:37.392456 4768 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"]
Feb 23 19:46:38 crc kubenswrapper[4768]: I0223 19:46:38.302747 4768 generic.go:334] "Generic (PLEG): container finished" podID="7fa525f5-efb8-4eee-b8d0-e58ba9263c38" containerID="2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b" exitCode=0
Feb 23 19:46:38 crc kubenswrapper[4768]: I0223 19:46:38.302812 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerDied","Data":"2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b"}
Feb 23 19:46:38 crc kubenswrapper[4768]: I0223 19:46:38.303224 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerStarted","Data":"a56554d0bdfdd6b6f87f290168974155a75e8450650c35555988f4fe68977e67"}
Feb 23 19:46:38 crc kubenswrapper[4768]: I0223 19:46:38.307756 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:46:38 crc kubenswrapper[4768]: E0223 19:46:38.308307 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:46:39 crc kubenswrapper[4768]: I0223 19:46:39.332901 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerStarted","Data":"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"}
Feb 23 19:46:40 crc kubenswrapper[4768]: I0223 19:46:40.329180 4768 generic.go:334] "Generic (PLEG): container finished" podID="7fa525f5-efb8-4eee-b8d0-e58ba9263c38" containerID="c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25" exitCode=0
Feb 23 19:46:40 crc kubenswrapper[4768]: I0223 19:46:40.329237 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerDied","Data":"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"}
Feb 23 19:46:41 crc kubenswrapper[4768]: I0223 19:46:41.342592 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerStarted","Data":"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"}
Feb 23 19:46:41 crc kubenswrapper[4768]: I0223 19:46:41.366370 4768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q7kp8" podStartSLOduration=2.870235976 podStartE2EDuration="5.366319375s" podCreationTimestamp="2026-02-23 19:46:36 +0000 UTC" firstStartedPulling="2026-02-23 19:46:38.307626184 +0000 UTC m=+4393.698112024" lastFinishedPulling="2026-02-23 19:46:40.803709623 +0000 UTC m=+4396.194195423" observedRunningTime="2026-02-23 19:46:41.361196337 +0000 UTC m=+4396.751682147" watchObservedRunningTime="2026-02-23 19:46:41.366319375 +0000 UTC m=+4396.756805175"
Feb 23 19:46:46 crc kubenswrapper[4768]: I0223 19:46:46.872162 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:46 crc kubenswrapper[4768]: I0223 19:46:46.873403 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:46 crc kubenswrapper[4768]: I0223 19:46:46.937348 4768 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:47 crc kubenswrapper[4768]: I0223 19:46:47.477346 4768 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:47 crc kubenswrapper[4768]: I0223 19:46:47.535626 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"]
Feb 23 19:46:49 crc kubenswrapper[4768]: I0223 19:46:49.430969 4768 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q7kp8" podUID="7fa525f5-efb8-4eee-b8d0-e58ba9263c38" containerName="registry-server" containerID="cri-o://2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506" gracePeriod=2
Feb 23 19:46:49 crc kubenswrapper[4768]: I0223 19:46:49.946721 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.016052 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content\") pod \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") "
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.016132 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrh98\" (UniqueName: \"kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98\") pod \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") "
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.016390 4768 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities\") pod \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\" (UID: \"7fa525f5-efb8-4eee-b8d0-e58ba9263c38\") "
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.017440 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities" (OuterVolumeSpecName: "utilities") pod "7fa525f5-efb8-4eee-b8d0-e58ba9263c38" (UID: "7fa525f5-efb8-4eee-b8d0-e58ba9263c38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.017662 4768 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-utilities\") on node \"crc\" DevicePath \"\""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.024147 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98" (OuterVolumeSpecName: "kube-api-access-nrh98") pod "7fa525f5-efb8-4eee-b8d0-e58ba9263c38" (UID: "7fa525f5-efb8-4eee-b8d0-e58ba9263c38"). InnerVolumeSpecName "kube-api-access-nrh98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.120045 4768 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrh98\" (UniqueName: \"kubernetes.io/projected/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-kube-api-access-nrh98\") on node \"crc\" DevicePath \"\""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.287411 4768 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fa525f5-efb8-4eee-b8d0-e58ba9263c38" (UID: "7fa525f5-efb8-4eee-b8d0-e58ba9263c38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.309048 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:46:50 crc kubenswrapper[4768]: E0223 19:46:50.309622 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.324893 4768 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fa525f5-efb8-4eee-b8d0-e58ba9263c38-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.444867 4768 generic.go:334] "Generic (PLEG): container finished" podID="7fa525f5-efb8-4eee-b8d0-e58ba9263c38" containerID="2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506" exitCode=0
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.444994 4768 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q7kp8"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.445003 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerDied","Data":"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"}
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.446117 4768 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q7kp8" event={"ID":"7fa525f5-efb8-4eee-b8d0-e58ba9263c38","Type":"ContainerDied","Data":"a56554d0bdfdd6b6f87f290168974155a75e8450650c35555988f4fe68977e67"}
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.446156 4768 scope.go:117] "RemoveContainer" containerID="2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.476872 4768 scope.go:117] "RemoveContainer" containerID="c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.510474 4768 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"]
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.521200 4768 scope.go:117] "RemoveContainer" containerID="2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.524818 4768 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q7kp8"]
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.552843 4768 scope.go:117] "RemoveContainer" containerID="2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"
Feb 23 19:46:50 crc kubenswrapper[4768]: E0223 19:46:50.553478 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506\": container with ID starting with 2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506 not found: ID does not exist" containerID="2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.553609 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506"} err="failed to get container status \"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506\": rpc error: code = NotFound desc = could not find container \"2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506\": container with ID starting with 2fc84baa4d4756a79da2fde2a0c857acfa9a82253568b40c486332ef95074506 not found: ID does not exist"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.553641 4768 scope.go:117] "RemoveContainer" containerID="c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"
Feb 23 19:46:50 crc kubenswrapper[4768]: E0223 19:46:50.554030 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25\": container with ID starting with c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25 not found: ID does not exist" containerID="c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.554073 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25"} err="failed to get container status \"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25\": rpc error: code = NotFound desc = could not find container \"c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25\": container with ID starting with c3ffe3eb3f3209e564e9197e44729632086a88fdabdccda1ab970d8dcf1fcf25 not found: ID does not exist"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.554098 4768 scope.go:117] "RemoveContainer" containerID="2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b"
Feb 23 19:46:50 crc kubenswrapper[4768]: E0223 19:46:50.554520 4768 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b\": container with ID starting with 2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b not found: ID does not exist" containerID="2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b"
Feb 23 19:46:50 crc kubenswrapper[4768]: I0223 19:46:50.554602 4768 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b"} err="failed to get container status \"2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b\": rpc error: code = NotFound desc = could not find container \"2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b\": container with ID starting with 2cb4c28de13aa896e5363b20aa4dec3ce331d8a87ca4f8f64d26399294ceee5b not found: ID does not exist"
Feb 23 19:46:51 crc kubenswrapper[4768]: I0223 19:46:51.318727 4768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa525f5-efb8-4eee-b8d0-e58ba9263c38" path="/var/lib/kubelet/pods/7fa525f5-efb8-4eee-b8d0-e58ba9263c38/volumes"
Feb 23 19:47:04 crc kubenswrapper[4768]: I0223 19:47:04.308576 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:47:04 crc kubenswrapper[4768]: E0223 19:47:04.310083 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:47:19 crc kubenswrapper[4768]: I0223 19:47:19.307202 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:47:19 crc kubenswrapper[4768]: E0223 19:47:19.308013 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:47:33 crc kubenswrapper[4768]: I0223 19:47:33.308719 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:47:33 crc kubenswrapper[4768]: E0223 19:47:33.311281 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"
Feb 23 19:47:48 crc kubenswrapper[4768]: I0223 19:47:48.308032 4768 scope.go:117] "RemoveContainer" containerID="bfcfcb52996549ab237d18fa2c1b23107f1ba4ab102bdc550576299a46a4bfb4"
Feb 23 19:47:48 crc kubenswrapper[4768]: E0223 19:47:48.308949 4768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-zckb9_openshift-machine-config-operator(ed422723-0e38-45bc-a0d9-c4c51d3f2dc7)\"" pod="openshift-machine-config-operator/machine-config-daemon-zckb9" podUID="ed422723-0e38-45bc-a0d9-c4c51d3f2dc7"